Hacker News with comments/articles inlined for offline reading

Authors ranked on leaderboard
Last updated: April 20, 2019 10:36
Reload to view new stories



Front Page/ShowHN stories over 4 points from last 7 days
If your internet connection drops, you can still read the stories
If there were any historical discussions of a story, links to all the previous submissions on Hacker News will appear just above the comments.

Historical Discussions: Post-surgical deaths in Scotland drop by a third, attributed to a checklist (April 17, 2019: 1017 points)

(1025) Post-surgical deaths in Scotland drop by a third, attributed to a checklist

1025 points 3 days ago by fanf2 in 72nd position

www.bbc.co.uk | Estimated reading time – 1 minute | comments | anchor


Deaths after surgery in Scotland have dropped by more than a third, research suggests.

A study indicated a 37% decrease since 2008, which it attributed to the implementation of a safety checklist.

The 19-item list - which was created by the World Health Organization - is supposed to encourage teamwork and communication during operations.

The death rate fell to 0.46 per 100 procedures by 2014, an analysis of 6.8m operations carried out between 2000 and 2014 showed.

Dr Atul Gawande, who introduced the checklist and co-authored the study, published in the British Journal of Surgery, said: 'Scotland's health system is to be congratulated for a multi-year effort that has produced some of the largest population-wide reductions in surgical deaths ever documented.'

Prof Jason Leitch, NHS Scotland's national clinical director, added: 'This is a significant study which highlights the reduction in surgical mortality over the last decade.

'While there are a number of factors that have contributed to this, it is clear from the research that the introduction of the WHO Surgical Safety Checklist in 2008 has played a key role.'




All Comments: [-] | anchor

rb808(2906) 3 days ago [-]

I'm not convinced about using a lot of checklists. I have no doubt that when checklists first come in, everyone follows them and there are some measurable improvements.

The problem I have is twofold. First, a few years down the road, will people still follow the checklists religiously, or just quickly tick the boxes?

The second part is that people's jobs get dumbed down by this. I'm sure we've all been in some situation where there's a structured system you have no control over; you just fill out forms. It's disempowering, so you learn not to think, just do your part of the system and hope it turns out OK. It's where bureaucracy and red tape start. I've quit jobs like that because I found it demoralizing.

It will be interesting to see if there's a follow-up five years later.

EDIT: Didn't you get the memo about your TPS reports? That is what I'm trying to avoid.

onion2k(2257) 3 days ago [-]

It's where bureaucracy and red tape start.

What people call bureaucracy is process that they follow without knowing why it's there. Every single rule in a business is there because something went wrong, and someone didn't want it to happen again.

The answer to bureaucracy is transparency. If you explain why a rule has to be followed people don't mind that there's a rule.

(You also have to regularly review the rules and get rid of ones that don't make sense any more, but that's much less common than you'd think.)

tonfa(3703) 3 days ago [-]

The checklist was given elsewhere in the thread: https://apps.who.int/iris/bitstream/handle/10665/44186/97892... it really doesn't strike me as something that dumbs down the job...

JoeAltmaier(4118) 3 days ago [-]

It must become a culture. In Japan, there's a culture in the public transport business (trains etc.) of 'point and call', where each responsible crew member points at a target, e.g. the safe door-opening zone marked on the pavement, and says out loud whether it's correct. It's a physical re-enactment of a checklist, and it becomes second nature because it's a physical act, not a check-the-box-on-a-form thing. That's how they keep it from being skipped or becoming just paperwork.

PhaedrusV(10000) 3 days ago [-]

.... They haven't been using checklists? I've been getting so upset at the medical community recently. Pilots solved most of these problems decades ago. We've been trying to help them figure out these tools that we developed to save lives, but it seems like things are only changing one funeral at a time.

seansmccullough(10000) 3 days ago [-]

I find it very concerning; using a checklist seems like common sense.

_carl_jung(10000) 3 days ago [-]

They have, since 2008. You're right about pace though, unfortunately rolling out change into bureaucracy is painstakingly slow.

throwaway5752(3350) 3 days ago [-]

I have been in hospitals and seen them going through checklists with my own eyes. I don't think your assumption is completely accurate. Maybe it depends on the health system, the procedure, or the facility.

jplayer01(10000) 3 days ago [-]

Unfortunately, there's a lot of resistance to using checklists, from nurses to doctors. I wish they'd just swallow their pride and worry more about how to improve than about how it makes them look when their mistakes become visible and explicit to the people around them.

dsfyu404ed(10000) 3 days ago [-]

'500yr of progress being held back by 500yr of tradition' is highly applicable to the medical profession. There's a lot of arrogance floating around the medical profession as well that tends to put a damper on anything that might reduce human error.

Surgeons don't need checklists because they don't make mistakes, unlike those filthy three dimensional bus and truck drivers. /s

arkades(4095) 3 days ago [-]

The medical community uses checklists. They've been using checklists for a while. A number of studies found that they only help for a few months, while they're new.

That said, 'the medical community' is not a homogeneous monolith, and you can absolutely find regional variation in what checklists are used for, how detailed they are, how closely they're followed, how people are accountable for keeping to them, etc.

>I've been getting so upset at the medical community recently

http://www.paulgraham.com/submarine.html

Don't confuse media narratives with what's actually going on in the medical community. It's a sure-fire way to get (a) upset, and (b) entirely misled. Medical science has been a major target for media FUD for ages.

DanBC(149) 3 days ago [-]

They have been using checklists. The news is that they've measured how much harm has been prevented. It's the first country-level (Scotland) research.

> The findings, reported in the April 17 British Journal of Surgery, are based on an analysis of 6.8 million operations performed between 2000 and 2014. The Surgical Safety Checklist was introduced in Scotland in 2008 as part of the Scottish Patient Safety Programme, and by 2014 the rate had decreased by 36.6 percent over six years to 0.46 deaths per 100 procedures. Researchers noted that this fall in death rates was not seen in patients who did not have surgery.

sithadmin(10000) 3 days ago [-]

>Pilots solved most of these problems decades ago.

In the safest planes (commercial airliners), pilots have systems recording their control inputs, and these can be used to directly attribute damage to or loss of the airframe to pilot malfeasance; a clear screwup will be documented and, in most cases, divulged to the public.

Medical professionals, on the other hand, seem to face a lower standard of accountability, simply because it's far more difficult (if not impossible in some cases) to monitor all the variables involved in treating a patient than to monitor human-designed systems. I have to wonder whether this epistemic quagmire, where cause and effect are not necessarily tracked (and in some cases not even truly understood), leads to a mindset more willing to write off negative outcomes as the result of external factors (comorbidity, patient age, patient adherence to physician instructions, even pure luck/probability) than to tackle the tough problem of correlating personal behaviors and actions with distinct outcomes.

mbar84(10000) 3 days ago [-]

I think part of the reason pilots are so diligent is that it's their own lives on the line too...

turc1656(10000) 3 days ago [-]

The article is short so doesn't really delve into what happens after those deaths - specifically investigation into cause or the legal ramifications.

I was hoping to get that info because the first thing that popped into my mind was that people are (or were) apparently dying for preventable reasons, if a simple checklist prevents 37% of surgical deaths. If that's the case, the only way I could imagine classifying those deaths would be 'gross negligence', and therefore not subject to the protections of the legal agreements you sign before surgery.

DanBC(149) 3 days ago [-]

I can't talk about Scotland because they have a devolved system and I have no idea how it works up there.

I can talk a little bit about England.

There are two main ALBs (arm's length bodies) that will be involved: NHS Resolution (the organisation that handles legal cases) and NHS Improvement (the organisation that handles QI work). NHS England and NHS Improvement are merging and I don't know what the new name will be. NHS Resolution used to be called the NHS Litigation Authority.

https://resolution.nhs.uk/

https://improvement.nhs.uk/

The information that NHSi has about 'Just Culture' is here: https://improvement.nhs.uk/resources/just-culture-guide/

If you have a look at this flow-chart you can see that they're trying to find out if an incident that caused harm was deliberate, grossly negligent, caused by wider system failings, etc. https://improvement.nhs.uk/documents/2490/NHS_0690_IC_A5_web...

If you have a look at NHS Resolution's page about learning from harm you can see that they're keen for healthcare professionals to 1) say sorry, 2) explain in full what went wrong, and 3) explain how that will be prevented in future. https://resolution.nhs.uk/services/safety-and-learning/

This expands upon a legal duty of HCPs and NHS Trusts in England: the Duty of Candour.

Here's the advice for doctors: https://www.gmc-uk.org/ethical-guidance/ethical-guidance-for...

And nurses: https://www.nmc.org.uk/standards/guidance/the-professional-d...

And other registered healthcare professionals: https://www.hcpc-uk.org/assets/documents/10003f72enc07-dutyo...

And organisations: https://www.cqc.org.uk/guidance-providers/regulations-enforc...

If a patient does decide to sue, I think they can only recoup their actual losses. I don't think we have punitive damages in England.

LeonM(4021) 3 days ago [-]

My former roommate is a pilot. When I first met him, I noticed that he uses checklists for just about everything, even the most basic everyday tasks.

After some time, I decided to apply that same mentality to my own life. Both in private and work situations.

I get it now. Checklists reduce cognitive load tremendously well, even for basic tasks. As an example: I have a checklist for when I need to travel, it contains stuff like what to pack, asking someone to feed my cat, check windows are closed, dishwasher empty, heating turned down, etc. Before the checklist, I would always be worried I forgot something, now I can relax.

Also, checklists are a great way to improve processes. Basically a way to debug your life. For instance: I once forgot to empty the trash bin before a long trip, I added that to my checklist and haven't had a smelly surprise ever since ;)

happyweasel(10000) 3 days ago [-]

Checklists force me to actually have some kind of plan for how I want to achieve a certain goal, because I have to be able to write down the individual steps.

Also, checklists make it very easy to just get started: begin with the first task.

And last but not least - I use checklists especially for things that I do NOT want to do at all (but I will have to do anyway). I am already annoyed (for whatever reason) by that task - so I want to minimize the amount of time I have to spend dealing with it. Therefore I use a checklist with the minimal amount of necessary steps to solve that problem or task, so I can get rid of it as fast as possible.

CloudNetworking(10000) 2 days ago [-]

If I remember anything from the 'Getting Things Done' training, it is precisely this:

Writing down tasks saves you a lot of brain cycles and also removes worries, as otherwise you have to be constantly 'refreshing cache' on pending tasks to not forget about them.

setquk(3966) 3 days ago [-]

I do this. Everyone seems to think I'm insane, however. These are the same people whose lives regularly descend into chaos doing the same stupid things over and over again.

grigjd3(10000) 2 days ago [-]

Maintaining checklists in documentation for software design reduces mistakes dramatically. Writing exacting step-by-step build instructions for one of our core products drastically cut the annoying requests for help I got.

hinkley(4017) 3 days ago [-]

I spend a lot of time putting checklists together at work, mostly for other people, and then ruthlessly removing extraneous items from the list (usually by fixing unreliable things of the sort: step 5: do X; step 6: double-check that step 5 actually happened).

I get lots of credit for the former, but maybe one in five people see the latter as the bigger contribution.

We end up having to deal with things when we are tired. You have to make them so an idiot can do them, because some day you will be that idiot. An all-day exhausting meeting followed by a major emergency. Kid up half the night with a fever. New video game just came out. Bad dreams, whatever.
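
In code terms, that pruning usually means making a step verify itself so the 'double check' item can be deleted. A minimal sketch in Python, where upload_config and config_on_server are hypothetical placeholders for whatever step 5 actually does:

    # Sketch: fold "step 6: double-check step 5 happened" into step 5 itself.
    # upload_config() and config_on_server() are hypothetical placeholders.

    def upload_config(path: str, server: str) -> None:
        ...  # the unreliable step 5

    def config_on_server(path: str, server: str) -> bool:
        ...  # the check a human used to perform as step 6

    def upload_config_checked(path: str, server: str) -> None:
        """One self-verifying step replaces two checklist items."""
        upload_config(path, server)
        if not config_on_server(path, server):
            raise RuntimeError(f"{path} did not land on {server}; retry or escalate")

Once the step is reliable and self-checking, the verification line can come off the human checklist entirely.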

lordnacho(4103) 3 days ago [-]

I don't know how I'd live without checklists and Trello boards. Just running a family with a couple of kids, I don't get how people used to do this.

Each kid has their own schedule, their own list of items they need that day, their own homework.

When you shop, you need to know what to get for everyone.

And there's the paperwork: you have to decide what to buy for major purchases and pay for them all, and you have to sign up for various things like voting registries and local tax.

Add to that your work, where you have a bunch of tasks to do as well, various projects, bugs, meetings, and so on.

I lived through the pre-mobile, pre-everyone-had-a-computer era, and I don't get how people did anything. Paper diaries? Rolodexes?

dontbenebby(3995) 3 days ago [-]

In human computer interaction they often talk about the 'gulf of execution' - when the desired end state is known but it's unclear how to get there.

https://en.wikipedia.org/wiki/Gulf_of_execution

I've found that using checklists helps tremendously when working on medium to large projects (things that take more than a weekend to complete).

For example, if I want to learn a new technology, maybe I'll get a book on Python, add the list of chapters to my project-tracking document, and strike them off one by one.

That sense of steady progress helps tremendously.

ekianjo(301) 3 days ago [-]

This is described in the book called 'the checklist manifesto'. Very good book by the way.

https://www.amazon.com/Checklist-Manifesto-How-Things-Right/...

shortandsweet(10000) 2 days ago [-]

What do you have on the travel checklist? Every time I travel I say to myself that I should make a list, but I always forget or put it off.

iscrewyou(4101) 3 days ago [-]

I have checklists for almost everything. Recently I've started worrying that I'm depending too much on them, because I should be able to think up the things I need to do without these checklists; as in, I'm not engaging my brain as much. But your comment just made me realize that now I have the cognitive capacity to do things other than remembering tasks. I recently passed two licensing exams that a lot of my friends failed, without being stressed out, and I think I can attribute that to my checklist habit.

(Shameless plug from a happy user: I rely heavily on the Things app for Mac and iOS)

SomeHacker44(10000) 3 days ago [-]

As a pilot, we use checklists in a solo flight differently than in a crew flight. I fly solo flights by "flows," and then use/review the checklist at the completion of a flow to confirm I did not miss any step. In general the flows are a right-to-left arrangement of things you do in a certain phase of flight. That way you scan over the things in order and do/verify everything in a single "flowing" motion.

e40(3801) 3 days ago [-]

Yep. What surprised me: certain things end up with surprisingly long checklists. My 'monthly' checklist at home has 30+ items on it, and it's not padded.

When I hit 50 I started making lists like a mofo, because I realized it would relieve my cognitive load. And it did, big time.

andersonvaz(10000) 3 days ago [-]

Why care if the dishwasher is empty?

umvi(10000) 3 days ago [-]

Is there a good app for storing/sharing various checklists (travelling checklist, selling house checklist, etc.)?

ChuckMcM(654) 3 days ago [-]

I would likely forget something if I didn't have a travel checklist. Something I keep in my Evernote notebook called, wait for it, checklists :-)

They are especially handy when on multi-stop travel where I'm spending all my time worrying about the logistics of the travel and meetings. It has definitely saved me from losing a number of phone chargers and razors over the years.

amelius(867) 3 days ago [-]

I also have a checklist for trips. But I haven't been able to come up with other use-cases in everyday life so far.

tjoff(10000) 3 days ago [-]

I see the appeal and do it to some extent. However, I haven't settled on a system for it.

I've realized that there is quite a bit of overlap between some of my lists and have been thinking that I'd like it to be a modular system: a checklist consists of any number of items and sublists, so that one can quickly combine them.

For instance, travel might optionally include the sublist abroad and/or skiing or summer. A work-related trip might add another set of items etc.

I bet there are apps for this, and I think I found a few when looking, but I'm afraid of the management overhead and would like access on my phone and computers without cloud bloat. Maybe git + vim-wiki or something is good enough (it would also work well enough on an Android phone with Termux).
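
A minimal sketch of that modular idea in plain Python; the sublist names are just examples, not from any particular app:

    # Sketch of composable checklists: sublists are plain lists that can
    # be merged into a trip-specific checklist. Names are illustrative.
    BASE_TRAVEL = ["passport/ID", "phone charger", "toiletries"]
    ABROAD = ["travel adapter", "check visa requirements"]
    SKIING = ["skins", "skis", "boots", "gloves"]

    def build_checklist(*sublists):
        """Combine sublists, dropping duplicates while keeping order."""
        seen, combined = set(), []
        for sublist in sublists:
            for item in sublist:
                if item not in seen:
                    seen.add(item)
                    combined.append(item)
        return combined

    if __name__ == "__main__":
        for item in build_checklist(BASE_TRAVEL, ABROAD, SKIING):
            print(f"[ ] {item}")

Kept as plain data in a git repo, this would cover the no-cloud requirement too.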

calvinv(10000) 2 days ago [-]

You don't know what you can't remember

arethuza(3257) 3 days ago [-]

My own anti-anxiety strategy for things like making sure the cooker is off, the windows are closed, etc. is to take pictures of everything with my phone (of course I never look at the pics).

I suspect there might be an opportunity for a 'visual checklist' app that prompts you to take pictures of stuff....

NB I do use a paper checklist for remembering to take stuff when I go up mountains at the weekend - forgetting gloves when it is snowing is never a good idea.....

throwaway848483(10000) 3 days ago [-]

I get the point of checklists for critical processes. But I think it's counterproductive to drone your life away following task lists full of feature creep.

First, with a little training you can put mental reminders in your mind, trusting yourself that they will show up when needed. It also helps keep your brain and memory in good working condition, and forces you to stay mentally clear rather than so overworked and tired that you have to rely on an external list.

Second, it's not robust to rely 100% on a task list being completed; sometimes forgetting something means it's not that important. It's more important to rely on situational awareness to know what needs to be done and in what priority. The logic behind it is: pick something from the hot mess and make the whole better.

Third, we can automate and delegate more easily now; quite often, if you need to use a checklist, a script would be even better.

ip26(10000) 3 days ago [-]

Good checklist design is not trivial, however. I use checklists professionally, and the list rapidly accumulates cruft for problems that have been solved by design. E.g., suppose one time someone forgot to sign your software release. 'Verify release has been signed' is added to the checklist. However a bright soul also integrates signing into the release build. Thus the problem is essentially solved once and for all, but the item will remain on the checklist in perpetuity, which adds overhead & decreases confidence in the checklist process.

I think they make sense for infrequently exercised routines of moderate complexity that are 100% execution. A complex but limited scope machine like a plane is really an ideal example. I guess preparing for a vacation could be another, although I get caught up because packing is totally different every time. I suppose I checklist myself when I rappel or go skiing ('Skins, skis, boots, ... working up the body')

fitzroy(4118) 3 days ago [-]

I've used a packing list for travel for a while now. My current method is to set everything up with checkboxes, uncheck them all beforehand and then recheck them as I pack or as I decide I don't need them for this trip.

I used to just group items by category (outerwear, electronics, etc), but now I've found it better to group items by where they're going to be packed (pockets, under-seat backpack vs carryon / checked bag).

Below that, I have short supplemental lists for things like camping, international, trips with swimming/beach, formal (wedding, etc), or trips longer than a week.

Also, a 'Before You Go' list that's stuff to remember when literally walking out the door (take out the trash, shut down home theater computer, etc) that isn't realistic to pack or do in advance. Still a work in progress, but it really helps free up mental energy.

callumprentice(3534) 3 days ago [-]

Can't tell you how happy it makes me to discover I'm not the only one who does this. My wife says it drives her nuts but I'm pretty certain she's also delighted by how we never forget anything before or during a trip.

We did something similar: for example, we once left a tray of leftover food in the oven (turned off) just before a trip and came back to a pulsating surprise. I added 'Check the oven is off and empty' to the list of things to do 'Just before we leave' and it's never happened since.

That said, there are several hundred things on my generic version, and it sometimes feels a bit daunting having to start afresh each time we travel, but I think it's been worth it so far.

I keep threatening to do a trip without it and see just how many things we forget, don't do, and screw up. Maybe one day... :)

mannykannot(4043) 3 days ago [-]

With regard to the difference between pilots and surgeons in this matter, it has been noted that the latter are not personally at risk from not using checklists. I am not suggesting that surgeons deliberately or cavalierly put patients at risk, but risk to oneself has a way of concentrating the mind.

WordSkill(10000) 3 days ago [-]

I lived in Scotland for a decade or so. On the surface, the provision of healthcare seemed good, and the media constantly tells you it is, but it tended to fall apart once you needed it for anything serious.

I heard about the experiences of countless others and, sadly, encountered it myself. As the patient, even though you are paying for the service through high taxes, you are not seen as the customer. This leads to a certain sense, when you do need a medical service, that they are doing you a favor, that you are somehow the recipient of charity and should be grateful for what you get.

Decisions about what medicines or treatments are available are often political, with certain high-profile conditions sucking up scarce resources at the expense of others. The focus is very much on managing public opinion.

There are many situations in which cost considerations have a horrific impact on lives. For instance, if there is a medicine which can prevent you losing your sight, but it is expensive, you will be offered it only for your second eye after you have lost sight in your first - the reasoning being that it is only worth spending that much money to prevent total blindness, but sight in one eye is enough.

If you think that your high taxes mean that your healthcare needs are covered, think again.

There is also a deep-rooted coverup culture that circles the wagons around bad doctors and poor processes. In my case, a ridiculous misdiagnosis had a real impact on my life for over a year. The other healthcare professional only came clean about it after the lead doctor had retired.

Again, you are not seen as the customer, as the one paying all their wages, so, you should just shut up and be grateful for what you get.

I often laugh when I hear inexperienced Americans talk about how much better the health system is in the UK. Sure, health insurance is expensive, but the actual healthcare is leagues ahead of anything available via any sort of national health service. Being recognized as the customer, with real rights, is of pivotal importance in receiving the care you need, when you need it.

In fact, you often come across UK citizens with a rose-tinted view of the National Health Service, but such opinions tend to change rapidly once you actually need something more than an occasional General Practitioners appointment. The whole thing is a cruel joke.

acallaghan(10000) 3 days ago [-]

> Being recognized as the customer, with real rights, is of pivotal importance in receiving the care you need, when you need it.

Except if you're poor, though, right? Isn't that really the case?

In the UK we don't tend to ignore kids' broken bones if they have poor parents. No one goes bankrupt and ends up homeless for contracting an illness, or having an accident at work.

We also spend less in taxes on the NHS than Americans spend on Medicare, and then you have to pay for private health insurance on top, including all of the co-pays and whatever. It's a system that's rigged against you. For the rich, by the rich, to make the rich richer.

Even if the top 1% of private healthcare is better in the USA, you're ignoring the 99% of healthcare that isn't.

Most Americans just can't see the simple fact - many many other countries are better at this than you are. This is a solved problem in many other developed countries.

Universal healthcare simply benefits everyone in society, and does so purely for the common good.

DanBC(149) 3 days ago [-]

> I often laugh when I hear inexperienced American talk about how much better the health system is in the UK. Sure, health insurance is expensive, but the actual healthcare is leagues ahead of anything available via any sort of national health service

Exactly the same private healthcare is available to you in Scotland, if you choose to pay for it.

cs02rm0(4090) 3 days ago [-]

The checklist seems to be here: https://www.who.int/patientsafety/topics/safe-surgery/checkl...

As a software engineer, I'd hope this was done in software, where it could be trivially filled in by multiple people, checked across the organisation, and couldn't go missing or get drinks spilled on it, etc.

Knowing the NHS south of the border, I assume it's not.
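
For what it's worth, a minimal sketch of what such a shared record might look like in software; the field and item names here are illustrative assumptions, not from any NHS system or the WHO list:

    # Sketch of a digital checklist record that several people can sign
    # off and an organisation can query. All names are illustrative.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class ChecklistItem:
        text: str
        signed_off_by: Optional[str] = None      # role of the confirmer
        signed_off_at: Optional[datetime] = None

        def sign_off(self, role: str) -> None:
            self.signed_off_by = role
            self.signed_off_at = datetime.now(timezone.utc)

    @dataclass
    class SurgicalChecklist:
        case_id: str
        items: list = field(default_factory=list)

        def incomplete(self):
            return [i for i in self.items if i.signed_off_by is None]

    checklist = SurgicalChecklist("case-123", [
        ChecklistItem("Patient identity confirmed"),
        ChecklistItem("Instrument and sponge count complete"),
    ])
    checklist.items[0].sign_off("anaesthetist")
    print(len(checklist.incomplete()))  # unfinished items are queryable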

fiftyacorn(10000) 3 days ago [-]

Paper's probably easier - these NHS software projects always end up as disasters.

mmaunder(1384) 3 days ago [-]

Does this include the new(ish) timeout procedure? Docs and nurses will take a beat before surgery to say the patient's name, what the procedure is, and I think one or two other very basic things. This has been done for a few years now in the USA and I find it fascinating. So basic in such a sophisticated space, and I have heard it has had a profound impact on patient safety.

Any surgical staff around to tell us more?

DanBC(149) 3 days ago [-]

I'm not surgical staff. The checklist does include this discussion phase. Here's the English version: https://improvement.nhs.uk/documents/450/SFHFT_-_Invasive_Pr...

KboPAacDA3(10000) 3 days ago [-]

The saga of how this checklist came into existence is detailed in Dr. Gawande's book The Checklist Manifesto.

bumby(10000) 3 days ago [-]

If I remember correctly, they had similar results (>30% improvements) with their pilot study in Michigan

anderspitman(1689) 3 days ago [-]

I've seen a huge personal impact adopting principles from David Allen's Getting Things Done methodology, which in some ways is simply a checklist management system. As others have mentioned, it's not just preventing me from forgetting to do things. Capturing my thoughts and intentions externally reduces stress and frees my mind to have more new thoughts. My creativity has skyrocketed since I started writing down every project/product idea I have, and adding to them over time. Currently using Trello (which works great for this), but eventually I'd like to switch to something open source or make my own system tailored to my needs.

indiandennis(10000) 3 days ago [-]

I used to use Trello too, but I switched to Notion, which is more free-form and lets you organize things how you want. Sounds like what you might be looking for.

intertextuality(10000) 3 days ago [-]

I work on software related to medical drilling. Nurses run through simulations (mostly a situation where something has gone wrong) and get graded.

It baffled me to learn that this is NOT the norm at hospitals. Due to the stress of a situation-gone-awry and inexperience, some horrific things can happen.[0][1] In some situations you may only have a few minutes to enact corrective procedures. In any case without checklists (and without experience from running routine simulations) it's very easy to make mistakes or forget what to do.

I thought the checklists themselves were standard, but it appears not...? The more I learn about hospitals' operating practice, the more wary I become. I have no idea why hospitals aren't like the aviation industry, with checklists and expiring certifications. (Or maybe I heard wrong and I'm just completely wrong here.)

[0]: https://www.telegraph.co.uk/news/2018/05/10/premature-baby-d...

[1]: https://www.nbcbayarea.com/news/local/Dental-Anesthesia-Unde...

> Mead said the principal risk is a patient's airway. He explained that a child's breathing tube can collapse without warning under sedation.

> "It happens instantaneously," he said. "You have maybe half a minute to make critical decisions about how you're going to manage that child's airway. You can't do that if you don't have somebody competent there helping you."

cwbrandsma(10000) 3 days ago [-]

I was part of a start-up developing checklists for high-risk pregnancies. The checklists were developed by some of the best doctors in the country, who had already proven the worth of using checklists in their own practice. Even with all of that, we had unending pushback from every single hospital we talked to. Eventually it killed the company because deals could not be closed. I still hear of hospitals saying they want checklists, and they keep saying that until they see one, and then they don't want it anymore.

Also, it wasn't a cost issue. The package was pretty cheap, all told. The pushback was about the 'system getting in the way'... which was kind of the point, unfortunately.

carbocation(2323) 3 days ago [-]

Many procedures in hospitals are checklist-driven. Peter Pronovost had an early, high-impact publication on the topic of bloodstream infection prevention with checklists: https://www.nejm.org/doi/full/10.1056/NEJMoa061115

amitport(4119) 3 days ago [-]

In the aviation industry pilots rarely get sued personally. IMO, the issue is more about legal system than it is about the medical one.

flukus(3935) 2 days ago [-]

I was surprised too. I worked on software for surgery checklists a decade ago and thought it was standard practice, and that only our digitization of it was new; the effects of a checklist have been known for quite a while now. The software I worked on would track every instrument and screw taken into a theatre (using hand scanners), the doctor would have his list of steps for the procedure and the equipment necessary, standard emergency packages were available for the exceptions, etc. It's not just for the direct patients either: some diseases can survive instrument sterilization, so knowing who else the instruments have been used on can be important.
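
As a rough sketch of the traceability side (all names invented, not from that product): each scan links an instrument to a patient, so exposure can be queried later.

    # Sketch of instrument traceability: each hand-scanner event links an
    # instrument to a patient, so later exposure queries are possible.
    # All identifiers here are invented.
    from collections import defaultdict

    class InstrumentLog:
        def __init__(self):
            self._uses = defaultdict(list)  # instrument_id -> [patient_id, ...]

        def scan_into_theatre(self, instrument_id: str, patient_id: str) -> None:
            self._uses[instrument_id].append(patient_id)

        def exposure(self, instrument_id: str):
            """Everyone an instrument has been used on, in order."""
            return list(self._uses[instrument_id])

    log = InstrumentLog()
    log.scan_into_theatre("retractor-07", "patient-A")
    log.scan_into_theatre("retractor-07", "patient-B")
    assert log.exposure("retractor-07") == ["patient-A", "patient-B"]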

ausbah(3910) 3 days ago [-]

can anyone comment about applying this 'checklist process' to software development? has anyone used it to help with debugging, testing, merging, etc.?

insertnickname(4107) 2 days ago [-]

Automated tests are a checklist for when you refactor code: 'The code is supposed to do A, B, and C. Does it still do those things?' If you practice TDD, the tests are just-in-time checklists for your implementation work.

The problem with relying on tests written after the implementation is that the checklist might be incomplete. If your checklist is incomplete, then you can't rely on it. Of course, even a highly disciplined practitioner of TDD cannot produce a test suite that is guaranteed to be complete and correct, but there is an enormous difference between a checklist that is adequate 50% of the time and one that is adequate 95% of the time.

Look for ways to avoid simply 'going through the motions'. Checklists (including TDD) are about avoiding making mistakes, not merely about noticing when you do make mistakes.
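
To make that concrete, here's a toy example of a test suite acting as a checklist; the slugify function and its contract are invented for illustration:

    # Toy example of "tests as a refactoring checklist". The slugify()
    # function and its contract are invented for this illustration.
    import re
    import unittest

    def slugify(title: str) -> str:
        """Turn a title into a URL slug: lowercase, hyphen-separated."""
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    class SlugifyChecklist(unittest.TestCase):
        # Each test is one checklist item: "the code is supposed to do A, B, C".
        def test_lowercases(self):
            self.assertEqual(slugify("Hello"), "hello")

        def test_replaces_spaces_and_punctuation(self):
            self.assertEqual(slugify("Hello, World!"), "hello-world")

        def test_no_leading_or_trailing_hyphens(self):
            self.assertEqual(slugify("  spaced out  "), "spaced-out")

    if __name__ == "__main__":
        unittest.main()  # re-run after every refactor: the checklist in action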

CGamesPlay(3951) 3 days ago [-]

I've used it in situations where writing automated tests would have been too difficult compared to running through a checklist each time. I made a web game based on a variant of Chess, and I would run through the checklist before I committed to verify I hadn't broken anything. https://gist.github.com/CGamesPlay/1b3e150c5448c51fd6ea71ea6...

projektfu(4104) 3 days ago [-]

One of the issues I've seen in implementing checklists is actually the urge of people to put extreme detail into the checklist and people not putting items in a reasonable order. If you look at the referenced checklist (https://apps.who.int/iris/bitstream/handle/10665/44186/97892...) it is a very approachable list that acknowledges good skill in the people involved. Note that the 'before patient leaves' checklist mentions the sponge count but the pre-incision checklist does not. That's because a sponge count is simply part of the job of an instrument nurse and something they always do when they open a pack of sponges.

It's also hard to insert checklists into established procedures. One thing you could do is attach them to important parts that are not allowed to be used until you read the checklist. For example, you could refuse to unwrap the main surgical pack until the pre-incision checklist is followed. Pharmacy could wrap the anesthetic induction drug in the first checklist. The surgeon could be responsible for signing the final checklist in order to get paid.

Keep checklists simple, use them every time. Put them in places where they have to be used.

specialist(4117) 3 days ago [-]

Wise cautionary note. Use the right tool for the job.

Back when we used to burn 'golden' CD-ROMs for releases: our checklists were getting too unwieldy, and we were still making mistakes.

So I started a Go/NoGo process, aka Roman evaluation. Anyone could stop the release for any reason. We'd fix the problem(s) identified and try again the next day.

(Of course, we fed each release's results back into process during the post mortems.)

Razengan(3967) 2 days ago [-]

Any recommendations for good apps for routine checklists (i.e. repeated daily)? (preferably cross-platform and without subscriptions.)

kristianp(490) 2 days ago [-]

Before checking in code:

* have the unit tests been built and run?

* have I reviewed all changes and removed commented out code, tidied up?

Edit: oh, you meant apps. I read your comment as asking for good items for checklists.
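
For what it's worth, those two items could even be enforced mechanically. A rough sketch of a git pre-commit hook (saved as .git/hooks/pre-commit and made executable); the pytest command and the commented-out-code heuristic are my assumptions, so substitute your project's own:

    #!/usr/bin/env python3
    # Sketch of a pre-commit hook encoding the two checklist items above.
    # The pytest command and the commented-out-code heuristic are
    # assumptions; substitute whatever your project actually uses.
    import subprocess
    import sys

    def checked(label, ok):
        print(("[x] " if ok else "[ ] ") + label)
        return ok

    def tests_pass():
        return subprocess.run(["pytest", "-q"]).returncode == 0

    def no_commented_out_code():
        # Crude heuristic: any staged added line that is a comment
        # containing an assignment is flagged as commented-out code.
        diff = subprocess.run(["git", "diff", "--cached"],
                              capture_output=True, text=True).stdout
        suspects = [line for line in diff.splitlines()
                    if line.startswith("+")
                    and line.lstrip("+ \t").startswith("#")
                    and "=" in line]
        return not suspects

    ok = all([checked("unit tests built and run", tests_pass()),
              checked("no commented-out code staged", no_commented_out_code())])
    sys.exit(0 if ok else 1)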

Ensorceled(10000) 2 days ago [-]

For 30 years, at every place I've gone to, I've implemented release-process and release-day checklists because they didn't exist. I've experienced pushback every time: 'We haven't needed these before.' At all of these places, release issues were always 'bad luck'. All of my teams have acknowledged the checklists were awesome and reduced release-day stress.

I completely understand why this took effort to get implemented and why it seems obvious in retrospect.

camnora(4092) 2 days ago [-]

Do you have examples or a blog post that covers some of the checklist items? I find this interesting.

analog31(10000) 2 days ago [-]

Add to the checklist: Can the patient survive the trip home? Based on a true story.

docker_up(3290) 3 days ago [-]

Doesn't this indicate some sort of malpractice in Scotland? I would think that forgetting important procedures leading to death is the definition of malpractice.

DanBC(149) 3 days ago [-]

The English NHS was dealing with about one million patients every 36 hours. https://www.england.nhs.uk/ They were doing over 8 million surgeries per year when the checklist was introduced.

For 2016/2017 the summary of 'never events' is here: https://improvement.nhs.uk/documents/2347/Never_Events_1_Apr...

There were 189 wrong site surgeries, and 114 retained foreign object post surgery.

That's not good enough, but it's not terrible.

The month by month data is here: https://improvement.nhs.uk/resources/never-events-data/

I have a comment about the regulatory framework (in England) here: https://news.ycombinator.com/item?id=19683366

matthewmacleod(3079) 3 days ago [-]

I'm not really sure what your point is. Ultimately, any kind of negligent behaviour would be 'malpractice', but that's not something that can be completely eliminated. Mistakes always happen, regardless of care or attention paid; it seems like the focus should be to develop systems which can minimise their occurrence, minimise the severity of them if they do occur, and help to provide information that can reduce recurrence.

I find the approach of things like rail and air accident investigations to be really useful. The goal for the investigation of any accident should not be about apportioning blame, but understanding why accidents happen and how their occurrence can be effectively reduced or eliminated.

drumttocs8(3881) 3 days ago [-]

That's terrifying. Thousands and thousands of lives have been lost simply because we were forgetting to do the right things? And we created a process to fix this in... 2019?

DanBC(149) 3 days ago [-]

The process was introduced in 2008.

This news is that we've now run research across an entire country (Scotland), for surgery carried out between 2000 and 2014, to find out how much harm has been prevented.

kasperni(4099) 3 days ago [-]

Here is a link to the actual checklist:

https://apps.who.int/iris/bitstream/handle/10665/44186/97892...

I'm a bit perplexed that something so simple, can reduce post surgical deaths by a third.

macspoofing(10000) 3 days ago [-]

I'm not. A clinician is a human, and as a human, they can be distracted, tired, hungry, or any number of things that can reduce their cognition leading them to miss something. A checklist reduces cognitive overhead and provides accountability (no way a clinician would remember if they did or didn't do a particular action days or weeks or months after the fact).

treis(10000) 3 days ago [-]

>I'm a bit perplexed that something so simple, can reduce post surgical deaths by a third.

Because it didn't. It attributes the entirety of a decade-long decline to the checklist, which is obviously nonsense. I'm not saying that checklists aren't good, but they're not miracle workers.

moftz(10000) 3 days ago [-]

A lot of the time, people skip over small, simple things because they either assume someone else already took care of them, or the things seem so obvious that how could anyone miss them? The checklist forces you to actually look up and see that the extra blood needed for the surgery didn't show up, or that one of the surgeons present is actually supposed to be in another OR.

It's like help desk asking you if it's plugged in. You may think of yourself as a technical person and that the problem is more complex like the driver isn't loaded or the monitor isn't properly configured but the problem could just be that you forgot to plug the video cable in. Just having those basic checks occur before anything can go wrong makes it so that when things do go wrong, time doesn't need to be wasted on going over the simple stuff or even worse, forgetting about the simple stuff and going down the wrong route.

alexpotato(3777) 3 days ago [-]

Atul Gawande explains in great detail how simple checklists can have big impacts.

For example, why would you need everyone in the room to say their name and what they do?

Simple: everyone is wearing a mask and often surgical scrubs that are the same color. By having everyone say 'Hello, my name is Dr. Jones and I'm the anaesthesiologist' you can quickly determine who is the person to direct questions to about the patient being under.

It gets even crazier when you hear stories about doctors going into the wrong operating room (especially at bigger hospitals where there are several ORs). A simple 'State who you are and why you are here?' costs very little and helps avoid costly mistakes.

Another example: having a checklist allows a junior person (e.g. a nurse) to challenge more senior people when they make a mistake.

Example without checklist:

Nurse: Dr, I think you forgot to do X.

Dr: I know what I'm doing, don't question me.

Example WITH checklist:

Nurse: Dr, you missed step #4.

Dr: I'm sorry, you are correct. We all agreed that was a necessary step and I missed it.

As have others in sibling threads, highly recommend Gawande's Checklist Manifesto.

graeme(2966) 3 days ago [-]

Humans can hold about 5-7 things in working memory. You'll note there are 22 items on that list + the headings of when to do them.

Expertise and practice makes some of those things automatic, but not enough to reduce it all below 5-7. Further, in any surgery there will be other things going on that require cognitive attention, decreasing capacity for other things.

Everything you can do to reduce cognitive overhead makes a process smoother.

I teach the LSAT. One section, logic games (officially, analytical reasoning), tests precisely this kind of cognitive load. Students must work with 4-6 rules plus whatever the situation calls for on each question.

The rules are impossibly simple. But, in the heat of things, students just aren't capable of working with that many items unless they create a structure using diagrams. And even seemingly tiny efficiencies have an outsized effect on speed and correctness.

Don't forget that these surgeons are often tired, busy, and stressed, three factors that worsen performance. Having a clear list that says 'do this now, dummy!' massively helps keep you on track even when you're a wreck.

shereadsthenews(10000) 3 days ago [-]

In any profession most practitioners are bad at what they do. There's nothing magical about medicine that exempts that profession from this rule. If your mechanic can forget to refill the oil in your engine there's no reason to believe that your surgeon won't leave a knife in your abdomen. There will be by necessity bottom-tier doctors and the checklist helps bottom-tier doctors stop killing people.

michaelt(3942) 3 days ago [-]

And here is the checklist [1]

These seem like very common-sense things to check - Is this the right patient? Are the instruments sterile? Have we counted all the surgical equipment after the procedure?

[1] https://apps.who.int/iris/bitstream/handle/10665/44186/97892...

Obi_Juan_Kenobi(10000) 3 days ago [-]

The issue is in the margins.

Maybe a shift-change caused the surgical team to be different from normal, so people aren't as comfortable with each other. There's social/professional pressure to fit within a hierarchy, especially with new people. Maybe a few people lower on the totem pole think something might be off, but don't want to say anything lest they risk appearing to undermine the surgeon.

So, at first it appears that a whole room of people would need to independently make the same mistake. But that's not so; only a few critical people need to make the mistake, and with enough ambiguity in the process (easily caused by anything happening 'out of the ordinary'), it won't be corrected some percentage of the time. Even seemingly trivial things.

The FAA found this occurring in the cockpit, especially from the '3rd seat'. A pilot and first officer may be 'in the weeds' dealing with the immediate threat of a situation, whereas others have the benefit of distance to reflect on a situation and observe more clearly. They don't get tunnel vision, and are in a better place to diagnose a tricky problem. However, they may not feel empowered to speak up, or feel they don't have the information the pilots do. Aviation has, broadly, sought to correct this and encourage anyone to speak up. Recently, this happened during the flight before the Indonesia 737 crash where similar AoA/MCAS issues occurred, but a 3rd pilot helped to address the situation.

_carl_jung(10000) 3 days ago [-]

Maybe so. When was the last time you forgot to do something that was 'common-sense'? Forgot to pick up your coat after work? Left your wallet on your desk? Now imagine each of those mistakes was fatal for a patient.

api_or_ipa(4112) 3 days ago [-]

That's the sign of a good checklist. The idea is to free up mental processing to focus on harder issues. Let the doctor focus on the hard stuff and not spend excess time trying to remember the simple stuff.

specialist(4117) 3 days ago [-]

As a lifelong patient, while I appreciate the checklists, I think some of the repeated rechecking is a bit much.

falcolas(10000) 3 days ago [-]

It's always worth remembering that 'after' can be after 10+ hours of continuous high-stress concentration, so remembering lots of little things can be very challenging.

lordnacho(4103) 3 days ago [-]

This is pretty interesting.

There seems to be a whole other list referenced: 'Is the anaesthesia machine and medication check complete?'

Also, the team is supposed to introduce themselves along with their roles. I wonder if it often happens that someone is missing?

koala_man(10000) 3 days ago [-]

Thanks, I can't believe this wasn't in article. The BBC might need a checklist of their own.

danjc(10000) 3 days ago [-]

Does anyone else think that a nearly 1 in 200 chance of dying following surgery is very high? I'm not sure I'd want surgery for a non-life threatening condition if the chance of death is that high.

tgsovlerkhgsel(10000) 2 days ago [-]

I suspect your chances of dying from surgery for a non-life-threatening condition are a lot lower than 1 in 200, while your chances are much higher if you are 80 years old, on the brink of death, with multiple different conditions at the same time, and having difficult surgery.

wonder_er(3891) 3 days ago [-]

The potential benefit of having checklists is enormous, and people's lives are on the line.

Many people rightfully ask

> Why has this not been adopted everywhere _yesterday_?

A book I read a few years ago might have the answer.

_Catastrophic Care: How American Health Care Killed My Father—and How We Can Fix It_ [0]

Here's the description, to evaluate if you want to give it a read:

> In 2007, David Goldhill's father died from a series of infections acquired in a well-regarded New York hospital. The bill was for several hundred thousand dollars--and Medicare paid it.

> These circumstances left Goldhill angry and determined to understand how it was possible that world-class technology and well-trained personnel could result in such simple, inexcusable carelessness--and how a business that failed so miserably could be rewarded with full payment.

> Catastrophic Care is the eye-opening result.

> Goldhill explicates a health-care system that now costs nearly $2.5 trillion annually, bars many from treatment, provides inconsistent quality of care, offers negligible customer service, and in which an estimated 200,000 Americans die each year from errors. Above all, he exposes the fundamental fallacy of our entire system--that Medicare and insurance coverage make care cheaper and improve our health--and suggests a comprehensive new approach that could produce better results at more acceptable costs immediately by giving us, the patients, a real role in the process.

[0] https://www.goodreads.com/book/show/13642523-catastrophic-ca...

edit: formatting

nwah1(3884) 3 days ago [-]

If we had a Medicare-for-all system, we could provide some economic discipline by having high-deductible plans, even if those plans were completely covered by a government-funded HSA of equal value, as long as the HSA could be withdrawn for retirement if unspent.

We also need transparent pricing and reviews. Sites like ZocDoc are trying to fill the niche of something like Yelp but for doctors. There's lots of easy low hanging fruit here.

cheerlessbog(10000) 2 days ago [-]

Essentially they pay for procedures and not for outcomes. Imagine if we compensated plumbers that way.





Historical Discussions: Notre-Dame cathedral: Firefighters tackle blaze in Paris (April 15, 2019: 986 points)

(987) Notre-Dame cathedral: Firefighters tackle blaze in Paris

987 points 5 days ago by kragniz in 707th position

www.bbc.co.uk | Estimated reading time – 5 minutes | comments | anchor

Media caption: There were gasps from the crowd at the moment Notre-Dame's spire fell

A major fire has engulfed the medieval cathedral of Notre-Dame in Paris, one of France's most famous landmarks.

The 850-year-old Gothic building's spire and roof have collapsed but the main structure, including the two bell towers, has been saved, officials say.

Firefighters are still working to contain the blaze as teams try to salvage the artwork stored inside.

President Emmanuel Macron called it a 'terrible tragedy'. The cause of the fire is not yet clear.

Officials say it could be linked to the renovation work that began after cracks appeared in the stone, sparking fears the structure could become unstable.

Paris prosecutor's office said it had opened an inquiry into 'accidental destruction by fire'. A firefighter was seriously injured while tackling the blaze.

Visibly emotional, Mr Macron said the 'worst had been avoided' and vowed to launch an international fundraising scheme to rebuild the cathedral.

How did the fire spread?

The fire began at around 18:30 (16:30 GMT) and quickly reached the roof of the cathedral, destroying its stained-glass windows and the wooden interior before toppling the spire.

Some 500 firefighters worked to prevent one of the bell towers from collapsing. More than four hours later, fire chief Jean-Claude Gallet said the main structure had been 'saved and preserved' from total destruction.

Sections of the cathedral were under scaffolding as part of the extensive renovations and 16 copper statues had been removed last week.

Deputy Paris Mayor Emmanuel Gregoire said the building had suffered 'colossal damages', and teams were working to save the cathedral's remaining artwork.

Media caption: The fire department said a major operation was under way

Historian Camille Pascal told French broadcaster BFMTV that 'invaluable heritage' had been destroyed, adding: 'Happy and unfortunate events for centuries have been marked by the bells of Notre-Dame. We can be only horrified by what we see'.

How have people reacted?

Thousands of people gathered in the streets around the cathedral, observing the flames in silence. Some could be seen openly weeping, while others sang hymns or said prayers.

Several churches around Paris rang their bells in response to the blaze, which happened as Catholics celebrate Holy Week.

Because of the fire, Mr Macron cancelled a speech on TV in which he was due to address the street protests that have rocked France for months.

Visiting the scene, the president said the cathedral was a building 'for all French people', including those who had never been there.

'We'll rebuild Notre-Dame together', he said as he praised the 'extreme courage' and 'professionalism' of the firefighters.

A symbol of a country

Analysis by Henri Astier, BBC World Online

No other site represents France quite like Notre-Dame. Its main rival as a national symbol, the Eiffel Tower, is little more than a century old. Notre-Dame has stood tall above Paris since the 1200s.

It has given its name to one of the country's literary masterpieces. Victor Hugo's The Hunchback of Notre-Dame is known to the French simply as Notre-Dame de Paris.

The last time the cathedral suffered major damage was during the French Revolution. It survived two world wars largely unscathed.

Watching such an embodiment of the permanence of a nation burn and its spire collapse is profoundly shocking to any French person.

Facts about Notre-Dame

  • The church receives almost 13 million visitors each year, more than the Eiffel Tower
  • A Unesco World Heritage site, it was built in the 12th and 13th centuries
  • Several statues of the facade of the Catholic cathedral were removed for renovation
  • The roof, which has been destroyed by the blaze, was made mostly of wood

What has been the international reaction?

The Vatican expressed 'shock and sadness,' adding that it was praying for the French fire services.

Germany's Chancellor Angela Merkel has offered her support to the people of France, calling Notre-Dame a 'symbol of French and European culture'.

UK Prime Minister Theresa May said in a tweet: 'My thoughts are with the people of France tonight and with the emergency services who are fighting the terrible blaze at Notre-Dame cathedral'.

Also on Twitter, US President Donald Trump said it was 'horrible to watch' the fire and suggested that 'flying water tankers' could be used to extinguish the blaze.

In an apparent response, the French Civil Security service said that was not an option as it might result in the collapse of the entire building.




All Comments: [-] | anchor

cwkoss(4117) 5 days ago [-]

Yikes. I wonder what was the nature of the work being performed. Perhaps they were working with volatile solvents?

CydeWeys(3953) 5 days ago [-]

Could be a spark from a welder, a worker on the roof taking a smoke break, an electrical fault, who knows. There's any number of possible ignition sources and there's a lot of wood in that structure.

marricks(4017) 5 days ago [-]

To preempt the conspiracy theorists showing up...

> Firefighters were rushing to try to contain a fire that has broken out at the cathedral, which police said began accidentally and was linked to building work at the site.

https://www.theguardian.com/world/2019/apr/15/notre-dame-fir...

ogeir(10000) 5 days ago [-]

How many of those doing the 'building work' were Arabs? Asking for a friend.

pastor_elm(10000) 5 days ago [-]

There were no workers on site (the fire started at 6:30pm local). There's no way they know yet what the actual cause was.

panarky(158) 5 days ago [-]

Facts aren't an antidote to conspiracy thinking.

To the conspiracy minded, facts only prove how deep the conspiracy goes.

magduf(10000) 5 days ago [-]

No conspiracy theory needed. I'm not sure about contractors in Paris, but here in America a large percentage of them seem to be completely incompetent and do shoddy work, so it's no surprise to me that they'd cause a fire.

Remember the old saying: never ascribe to malice that which can be adequately explained by stupidity or incompetence.

pacoWebConsult(10000) 5 days ago [-]

A conspiracy theorist would expect the police to be covering up the fact that multiple churches in Paris have been the targets of arsonists this year.

maccio92(10000) 5 days ago [-]

Sure looks like an accident in light of all these other attacks.. https://www.newsweek.com/spate-attacks-catholic-churches-fra...

tirrit(10000) 5 days ago [-]

This is the top Hacker News story..? :O

What is the relevance to HN, and doesn't the rest of the Internet media cover this well enough on their own?

A sad day for both HN and Notre Dame, in my book :/

Edit: [I won't remove my bad comment, since there are several responses relevant to it, but I would like to apologise for typing before thinking (and even before re-reading the policies I, myself, am commenting on), and lastly for the non-relevant nature, and over-dramatisation, of this comment. I see all that I did wrong, and will try to make better choices in the future ;) Sincerely, tirrit.]

nkkollaw(2759) 5 days ago [-]

Whatever man, someone posted it, it got upvoted.

onetimemanytime(1457) 5 days ago [-]

>>What is the relevance to HN....

The up-voters know the relevance. Might have something to do with 900 years of history, surviving every freaking thing possible during that time, only to be burned during the most peaceful and prosperous time... maybe ever? Victor Hugo? A lot of history has happened there, like the https://en.wikipedia.org/wiki/Coronation_of_Napoleon_I or https://en.wikipedia.org/wiki/Notre-Dame_de_Paris#Events_in_...

matt4077(1176) 5 days ago [-]

Being a somewhat complete human being requires an appreciation of the major works in fields other than your own.

timothevs(10000) 5 days ago [-]

Today, we are all French. As a student of European History, I want to curl up and cry. I proposed to my beautiful wife of 11 years beneath the spire of Notre Dame. We fell in love walking along the bouquinistes. There is a terrible empty feeling in my heart this afternoon. It is like losing a part of myself this day.

Yes, I know the Notre Dame will be built again. But that might not happen till after I am long gone.

cphoover(3918) 5 days ago [-]

Sending good vibes your way.

MR4D(3964) 5 days ago [-]

You have a great story (albeit with a touch of sadness).

I would think that this type of event would bring the French together in a way few other events could. I'd expect the Gilets Jaunes movement to subside quickly.

Yes, today we are all French - and expect that we all want to see Our Lady rebuilt. Faster, better, stronger, and much more fire-retardant than in the past.

Windsor Castle, damaged by fire in 1992, was refurbished in 5 years, [0] and although a national treasure, it was not at the level of Notre-Dame. But it was rebuilt, and was even completed ahead of schedule.

The cost doesn't matter - it will probably be well over a billion. But you will see concerts, TV specials, and all sorts of fund raisers to rebuild her.

And in this you will see the best thing of all - the French (and even people like me who are only French on occasions like this) showing our love to rebuild her.

This is the message you should hold in your heart today - one of love and empathy, and dare I say, the grace of God that she was intended to foster.

[0] - https://en.wikipedia.org/wiki/1992_Windsor_Castle_fire

WalterBright(4022) 5 days ago [-]

> Today, we are all French.

Yes.

Creationer(10000) 5 days ago [-]

The cause of this fire is unclear, but other churches in France have been targets of arson and vandalism:

https://www.rt.com/news/456629-french-catholic-churches-atta...

treis(10000) 5 days ago [-]

>Yes, I know the Notre Dame will be built again. But that might not happen till after I am long gone.

The cathedral is made of stone. It will survive the fire and won't need to be rebuilt. 'Just' a new roof will be needed and remediation for the fire. Just in quotes because clearly that's still a massive undertaking which will take years and a whole lot of dollars.

The spire is clearly a great loss. As are the statues that were on the roof. Hopefully the stained glass makes it out ok, but that's probably optimistic. There's also the artifacts and art work in the interior that will be damaged. But the iconic bell towers remain. The statues on the facade are likely undamaged. The interior nave and apse will survive. After restoration it will still be essentially the same even if we lose some irreplaceable artifacts.

bhandziuk(10000) 5 days ago [-]

Can fire cause significant structural damage to a stone building like that?

davidcollantes(4108) 5 days ago [-]

There is lots of wood in that stone building. Lots.

snarkyturtle(3657) 5 days ago [-]

The melting point of marble is 800°C, and if reports are true that it's a construction mistake and not something more drastic, it stands a good chance of not collapsing.

vidanay(10000) 5 days ago [-]

Yes. If the fire gets deep into the walls it can severely weaken mortar or even crack stones.

CydeWeys(3953) 5 days ago [-]

Yes. Keep in mind that a lot of the wood is structural too (i.e. the roof). It's conceivable that this could cause collapse of some or all of the building.

tguedes(10000) 5 days ago [-]

Definitely. There is a great book called Pillars of the Earth where a cathedral in England burned down in ~1150.

pastor_elm(10000) 5 days ago [-]

Doesn't look like they have any capacity for even fighting it. Where are the planes loaded with water? Fire trucks with ladders?

tomswartz07(4072) 5 days ago [-]

Water is heavy.

If you drop water on a structure like this, you end up having a collapsed building that is also on fire.

Thaxll(10000) 5 days ago [-]

https://en.wikipedia.org/wiki/Paris_Fire_Brigade

'it is the largest fire service in Europe and the third largest urban fire service in the world, after the Tokyo Fire Department and New York City Fire Department. Its motto is 'Save or Perish' (French 'Sauver ou périr').'

I'm sure they have enough capacity.

Also you don't just drop water from planes above cities... this makes no sense.

nsxwolf(3589) 5 days ago [-]

I've been watching different feeds and looking at photos and haven't seen any evidence of firefighting of any sort. What are they doing? Just letting it burn?

CydeWeys(3953) 5 days ago [-]

It's not like planes loaded with water are standing by at the municipal fire brigade ready to go at a moment's notice. They would take hours to deploy. You typically see them used in wilderness firefighting, where fires rage for many days if not many weeks.

JshWright(3519) 5 days ago [-]

What would you expect planes to do...?

This is a _really_ hard fire to fight. Their first priority is going to be ensuring everyone is safe, and likely setting up interior and exterior positions where they intend to stop the fire from spreading (the areas that are already involved are a total loss, let them go and focus on saving what can be saved).

johannes1234321(3984) 5 days ago [-]

The key point is 'fire control'. You don't want to randomly throw water on a fire; it likely won't affect the fire itself, but it will destroy the things not yet touched by the fire.

To get a fire down you either have to cut off its access to oxygen or prevent it from spreading.

A big fire like that can't be covered completely to cut it off from oxygen.

With fire control you can, however, try to cool down the areas close to the flames to prevent further spreading. Save what can be saved, like the lower walls.

maxxxxx(3988) 5 days ago [-]

Trump gave the same advice. After being involved in projects where we had to put out some fires, it annoys the hell out of me when people who have never thought about the problem offer 'advice' or say 'why don't you just'. Even rejecting it costs energy. Sometimes it's better to just shut up when you know nothing.

SirensOfTitan(4023) 5 days ago [-]

A presumably naive observer remarks from far away: why aren't the experts on scene doing what I think they should do from my couch?

The correct question here is: what don't I know about firefighting that would explain the actions of the firefighters here?

Not_a_pizza(10000) 5 days ago [-]

Probably a contribution by 'refugees', if I had to guess. Running people over wasn't enough.

sctb(2629) 5 days ago [-]

We've banned this account.

setquk(3966) 5 days ago [-]

That's terrible. I hope nobody is hurt

Kaveren(4111) 5 days ago [-]

Firefighters have said there were no deaths, which is good. No reports of injuries, according to a religious official.

nkkollaw(2759) 5 days ago [-]

This is horrible. I wonder what on the scaffold started the fire. Even a cigarette could have done this..?

franciscop(1835) 5 days ago [-]

I'd recommend following the events live on Twitter: https://twitter.com/search?q=%23notredame

Though the fire seems quite intense, not sure how much will be preserved.

LeoPanthera(2861) 5 days ago [-]

Social media doesn't seem like a good way to follow live events. It values being first, and being shocking, over being accurate.

As much as rolling news has its issues, live television news from a reputable network is better.

theclaw(10000) 5 days ago [-]

Why did this vanish from the front page?

glaurung_(10000) 4 days ago [-]

I noticed that too. Apparently hn's algorithm really likes recent posts. Switch your view from 'top' to 'best' and it jumps back up.

cc_nixon(10000) 5 days ago [-]

The saddest thing will be losing the stained glass windows. Those have been somehow preserved for centuries but are going to get severely damaged here.

justinator(4045) 5 days ago [-]

I mean, the entire cathedral has already been severely damaged. These things are in a state of constant rebuilding - it took many generations just to get built.

Notre Dame will be fine.

geff82(3628) 5 days ago [-]

I lived in Paris as a child and have often been at Notre Dame. I still feel heavily connected to France. Seeing this precious diamond burn is like having my own house burning. What a cultural tragedy.

geff82(3628) 5 days ago [-]

What kind of cultural war did I get in that by expressing my feelings towards this monument I get downvoted?

cdfky(10000) 5 days ago [-]

If it burns down, perhaps a functional skyscraper should be built in its place. It would be a testament to a better, modernist future that we could build if we disregard all prior historical biases people all over the world still have

return0(1886) 5 days ago [-]

especially religious biases

vonnik(1717) 5 days ago [-]

About 11 years ago, I climbed to the top of the spire of Notre Dame. It was not a place open to the public. (Although this is true, I find my own story hard to believe, so I will understand if there is skepticism.)

We were drunk and it was dark and late. We hopped the iron fence in the back, and scaled the southern wall that runs along the nave, where the flying buttresses are.

To get to the top, you climbed the walls and roof outside the building until you reached the base of the spire, and then you climbed inside the spire up several stories linked by rough wooden ladders, and then you had to get out and climb outside again, on a series of metal hooks, to get to the top where you could touch a metal globe and cross.

There was very little security (just one trap door inside the spire that you had to climb through, where you had to make sure breaking an electrical current didn't set off an alarm).

It was all very old, obviously, and old in a way of places where no one ever goes. Little used, and therefore neglected. Was the wiring on the trapdoor well insulated? I doubt it.

There was a small group of climbers in Paris who knew about this. Maybe a couple dozen people. One of them would occasionally lead a small group of friends: free climbing to the top of one segment of the wall, and then letting down a rope to help up those behind.

Notre Dame is at the center of Paris. There is a bronze marker in front of the church called 'kilometre zero,' from which all distances along French national routes are measured. From the top of the spire, the city fanned out like petals around a pistil. Paris was made to be seen from that one point, where no one ever went except a few climbers and pigeons, and maybe an adventurous priest.

The climber who took us up to near the top of the spire lay himself down on a rafter in its hollow interior, above the void, and fell asleep. Like I said, we were drunk, and it was all very dumb and dangerous.

When we came back down, about a foot before the last person touched the ground again, his rope broke. He picked it up, stared at it for a second, murmured 'C'est mort' ('It's dead'), and threw it away.

mongol(10000) 5 days ago [-]

Cherish that memory! It felt good to read.

bennettfeely(2196) 5 days ago [-]

Too early to say if it is related but a dozen or more Catholic Churches across France have been desecrated, vandalized, or set on fire since February of this year.

https://aleteia.org/2019/02/16/string-of-attacks-on-french-c...

sctb(2629) 5 days ago [-]

> Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents.

https://news.ycombinator.com/newsguidelines.html

localhostdotdev(4055) 5 days ago [-]

Aleteia seems to just be a Catholic media outlet:

https://en.wikipedia.org/wiki/Aleteia

It seems to be an almost-official Catholic source.

matt4077(1176) 5 days ago [-]

Let's not peddle flimsy rumors, especially not those accusing certain groups (as is being done here via the linked article).

There is nothing to be lost in waiting for actual investigative results. And, fwiw, the early images clearly point to the fire breaking out in the roof, where construction scaffolds are clearly visible in all the images.

ritz_labringue(10000) 5 days ago [-]

This does not look like a reliable source of information at all.

berberous(4065) 5 days ago [-]

Very sad. While a different cathedral, it reminded me of Orson Welles' soliloquy in 'F for Fake' on Chartres (https://www.youtube.com/watch?v=ksmjh8LL2zA):

"And this has been standing here for centuries. The premier work of man perhaps in the whole Western world, and it's without a signature: Chartres. A celebration to God's glory and to the dignity of man. All that's left, most artists seem to feel these days, is man. Naked, poor, forked radish. There aren't any celebrations. Ours, the scientists keep telling us, is a universe which is disposable. You know, it might be just this one anonymous glory of all things, this rich stone forest, this epic chant, this gaiety, this grand, choiring shout of affirmation, which we choose when all our cities are dust, to stand intact, to mark where we have been, to testify to what we had it in us to accomplish.

Our works in stone, in paint, in print, are spared, some of them for a few decades or a millennium or two, but everything must finally fall in war or wear away into the ultimate and universal ash. The triumphs and the frauds, the treasures and the fakes. A fact of life. We're going to die. 'Be of good heart,' cry the dead artists out of the living past. Our songs will all be silenced — but what of it? Go on singing. Maybe a man's name doesn't matter all that much."

stcredzero(3109) 5 days ago [-]

In one Scientific American article, the top contender for the building which would remain longest if people suddenly disappeared would be the bottom pillars of the St. Louis Arch. Reinforced concrete sheathed in stainless steel. The central arch is just steel, so it would corrode and collapse faster. The remaining pillars would persist for many thousands of years just on their own.

airstrike(3040) 5 days ago [-]

All I can say is 'fuck....'

I'm truly heartbroken

isostatic(3749) 5 days ago [-]

Many of these wooden structures are nearly 1,000 years old. The spire itself only dates from the 19th century.

Merrill(10000) 5 days ago [-]

The spire which collapsed was a reconstruction built during the restoration in the mid-19th Century.

The Cathedral of Notre Dame, Paris, Restored by Eugène-Emmanuel Viollet-le-Duc and Jean Baptiste Lassus -- http://www.victorianweb.org/art/architecture/vld/3.html

'The Commission on Historical Monuments approved most of Viollet-le-Duc's plans, but rejected his proposal to remove the choir built under Louis XIV. Viollet-le-Duc himself turned down a proposal to add two new spires atop the towers, arguing that such a monument 'would be remarkable but would not be Notre Dame de Paris'. Instead, he proposed to rebuild the original medieval spire and bell tower over the transept, which had been removed in 1786 because it was unstable in the wind.'

https://en.wikipedia.org/wiki/Eug%C3%A8ne_Viollet-le-Duc#Not...

sawjet(10000) 5 days ago [-]

Cathedrals like Notre-Dame were the moonshots of their time, only able to be built by immense societal consensus. It's unlikely, with today's demographic shifts, that we will ever witness a project as monumental as these were 1,000 years ago. What a shame.

btlr(10000) 5 days ago [-]

Apollo program, LHC, Human Genome Project, ITER, and more...?

TwoNineA(10000) 5 days ago [-]

> immense societal consensus

They had a referendum asking the population to approve or not the construction? Or was it an autocratic entity (king and/or church) who decided?

unstatusthequo(4059) 5 days ago [-]

Bureaucracy and corruption might be more of a reason why something of this grandeur may not happen easily anymore.

leptoniscool(4007) 5 days ago [-]

Not necessarily... Even a cell phone is the result of hundreds of thousands of people coordinating: hardware designers, manufacturers, software, shipping, retail, etc. If anything, everything around us is a monument to a hyper level of societal/global consensus.

username223(3706) 5 days ago [-]

They were much greater: the people who designed them were dead long before they were completed. For better or worse, we can't build cathedrals now.

stcredzero(3109) 5 days ago [-]

Sagrada Familia

https://news.nationalgeographic.com/2015/11/151105-gaudi-sag...

Also, from other threads in these comments:

https://en.wikipedia.org/wiki/Dresden_Frauenkirche#Reconstru...

https://www.bbc.com/news/uk-england-york-north-yorkshire-281...

With increases in wealth and technology, we could see even more monumental projects implemented by only fractions of society. Then, there's also China. There are serious proposals for China to unilaterally dig a tunnel to Taiwan. I suspect they're also serious about long term colonization of the Moon and Mars.

SketchySeaBeast(10000) 5 days ago [-]

Because absolute authority no longer lies in the hands of a few people? What societal consensus was required? As long as the church, which was flush with cash, kept paying, people kept building.

bambax(3438) 5 days ago [-]

The Reims cathedral was heavily damaged during WWI and completely rebuilt. It loosely resembles Notre-Dame de Paris.

https://en.wikipedia.org/wiki/Reims_Cathedral

maxxxxx(3988) 4 days ago [-]

'immense societal consensus.'

Wasn't it more that some kings or bishops got a big ego trip and the peasants had to do the work and got taxed?

Drup(4118) 5 days ago [-]

And yet the Sagrada Família is being slowly built in Barcelona, and financed mostly by tourism.

jammygit(10000) 5 days ago [-]

Depends as well on centralization of wealth. Maybe we'll see somebody like Bezos build a pyramid in orbit to be his tomb.

Godel_unicode(4074) 5 days ago [-]

You're aware that the literal moonshots were in living memory, right?

woodrowbarlow(10000) 5 days ago [-]

I think the first literal moonshot (Apollo 11) is on par in terms of monumental projects built by immense social consensus, and the future of space exploration could yet produce another example within our lifetime.

StephenAmar(10000) 5 days ago [-]

Quelle catastrophe ('What a catastrophe')

aurea(10000) 5 days ago [-]

Quelle

aaomidi(4016) 5 days ago [-]

Curious, for situations like this what happens if we airdrop a ton of small balls of dry ice? Like a hailstorm of dry ice?

marzell(10000) 5 days ago [-]

I'd imagine there are a few problems with approaching this as a solution. This is all just speculation on my part.

The weight itself being a problem first off, as it could cause further damage and even make it easier for the fire to spread.

The rate of sublimation would be a problem too, as the outgassing could actually act as an insulating layer, preventing the heat of the fire from actually increasing the release of CO2 at a useful rate to displace the oxygen that is enabling the fire.

Additionally, normally it is recommended to work with dry ice in a well-ventilated environment. CO2 is toxic, and also displaces oxygen, creating a significant risk of asphyxiation. With very large volumes such as this, you cannot effectively ventilate, so this could cause risks for those in the surrounding environment, and also makes it impossible for firemen to work in the area. They can't exactly run in with masks and have tanks of oxygen strapped to their backs.

NeoBasilisk(10000) 5 days ago [-]

imagine being the guy that accidentally burned down the Notre Dame Cathedral in 2019

reneberlin(10000) 5 days ago [-]

Because you booted 'crysis' in an electron-app running on a mac-mini from 2011.

danso(4) 5 days ago [-]

Interesting (but not surprising) to see this at the top of HN. I wonder if programmers feel even more existential dread about this compared to the average person — Notre Dame is a monument that seems eternal compared to the web apps we build.

edit: 'compared to the average person'

stcredzero(3109) 5 days ago [-]

There's a place both for the Cathedral and the Bazaar.

Consultant32452(10000) 5 days ago [-]

By comparison, I wrote some code in the early 2000s that is still the basis for a profitable SAAS product. It uses old frameworks that no one wants to use because it doesn't help their resume today. Multiple attempts at rewrites have failed, and they keep going back to what I wrote because it 'just works.' I think it's amazing that a thing I built lasted almost 20 years. Thousand+ year old buildings blow my mind.

beat(3711) 5 days ago [-]

Someday, when we win the final battle against Moore's Law, computers will stop evolving. Software will become as stable as medieval architecture. Maybe not in our lifetimes, but it'll happen.

I'm currently reading a science fiction novel that has a scene of someone who just woke up from nearly 200 years of hibernation getting attacked by all sorts of futuristic machinery, due to a 200 year old assassination virus that was still running. It could find him using the inevitable everywhere surveillance (retina scans everywhere), and then use whatever to attack - automated flying cars, a robot waiter, even a couch massage unit.

jackfrodo(10000) 5 days ago [-]

I think people who build things, whether it's software or buildings or anything, are more likely to deeply grapple with the fact that nothing lasts. Ozymandias, and so on.

interlocutor(4107) 5 days ago [-]

What is the building made of? Sand, gravel, stone, and cement are fairly inert. It must be made of some other flammable material.

evandev(10000) 5 days ago [-]

A lot of the structure is wood.

cududa(3306) 5 days ago [-]

...wood

nine_k(4085) 5 days ago [-]

«I know this doesn't help, but we have exquisite 3D laser maps of every detail of Notre Dame, thanks to the incredible work of @Vassar art historian Andrew Tallon. Prof Tallon passed away last November, but his work will be absolutely crucial»

https://twitter.com/grouchybagels/status/1117852841530368000...

TremendousJudge(10000) 5 days ago [-]

Your comment reminded me of this story from the codeless code[0]

>..."The initiate was only half-right," said Bawan to the emptiness. "True, the value lies not in carven oak, but neither does it lie in the shape of the carving; for both the real pillar and the virtual one may be lost, and the temple will be no poorer. But when wood first yields to metal, one more thing is made: and that is the sculptor."

This building was a major achievement that inspired millions of people through centuries. It is a landmark of humanity. A simple physical fire will not destroy its legacy in our culture.

[0] http://thecodelesscode.com/case/122

Datenstrom(4118) 5 days ago [-]

I hope someone did a 3D scan of the entire structure. Seems likely someone would have.

axlee(10000) 5 days ago [-]

It is one of the most photographed landmarks in the world (with...billions(?) of pictures), so even without a proper 3D scan we probably could remodel the whole thing to an astounding precision.

ghostbrainalpha(10000) 5 days ago [-]

That's a safe bet. And, slightly related, some of the coolest 3D model kits ever are made for the cathedral.

https://www.youtube.com/watch?v=-nwEFCBLrdk

It's going to feel a little bit different building one of these now.

cableshaft(4095) 5 days ago [-]

Assassin's Creed Unity? Sorta kinda?

neuronexmachina(10000) 5 days ago [-]

I don't know what the resolution was, but it looks like art historian Andrew Tallon has done laser scans of several dozen historic buildings, including Notre Dame: https://news.nationalgeographic.com/2015/06/150622-andrew-ta...

pj_mukh(3619) 5 days ago [-]

Here's a detailed model of its entrance archway. Other models this detailed exist for some of the other parts as well.

https://sketchfab.com/3d-models/portail-notre-dame-de-paris-...

agumonkey(929) 5 days ago [-]

Time for all our AI/ML to show its prowess by reconstructing as many details as possible from all the pictures and remains...

stevep98(10000) 5 days ago [-]

Even if not, it is extensively photographed.

The Photosynth TED talk uses Notre Dame as an example of reconstructing geometry from a random selection of photos.

https://www.ted.com/talks/blaise_aguera_y_arcas_demos_photos...

Skip to 3:45 or so.

kweks(4114) 5 days ago [-]

I had the privilege of undertaking the first (and now the last..) study on the spire of Notre Dame since 1933.

The restoration works that were under way are in part a result of our recommended actions.

The spire was incredible. It was one oak trunk, connected with a 'Scarf Joint', or 'Jupitre' in French (bolt-of-lightning joint).

There were the names of the last guys to inspect it in the 1930s, engraved at the top. There was a French WW2 bullet embedded in the spire, presumably shot at a German sniper who was in the spire...

Everything in the roof was antique wood. Anyone that went into the roof was paranoid of fire.

It's a very, very sad day.

As a celebration, I'm throwing up some photos that we'd never published from our study.

https://imgur.com/gallery/9k9I8Y0

qzw(10000) 5 days ago [-]

Thanks for sharing the photos. It's an incredible tragedy. I had the privilege of visiting Notre Dame twice, but I'm deeply saddened for all those who will not have the opportunity to see it in its former magnificent form, including my own young children. I only hope that some of it will remain and be a foundation for rebuilding.

carlob(4066) 5 days ago [-]

Just a point of clarification: when you say antique, that means mid 19th century. The original 13th century spire was removed in 1786 because it was falling apart.

krisrm(10000) 5 days ago [-]

This is awesome - thanks for sharing your story, and that photo. I'm sure that the detailed studies performed by you and others will be invaluable going forward - both with remembering and healing, and eventually perhaps, rebuilding.

adt2bt(4119) 5 days ago [-]

I cannot imagine how you feel right now. I've only been inside once two years ago, and I am devastated to see this structure burn.

Can you perhaps comment on what restoration work may have caused this?

Balgair(2928) 5 days ago [-]

The Bells of Notre Dame:

https://www.youtube.com/watch?v=VAzDXgxaq94

https://www.youtube.com/watch?v=W5wDX-pZLOs

It'll be decades, but if I'm lucky enough to live very long, I may hear those peals myself.

lambdasquirrel(3933) 5 days ago [-]

Maybe California could donate some fire-resistant redwood for the rebuilding?

brundolf(3248) 5 days ago [-]

Is it possible that the stone parts of the structure will be more resilient than the (apparently) wooden roof, and might survive if the fire is put out quickly enough?

dmckeon(3766) 4 days ago [-]

You have my sympathy for this tremendous loss, but could you comment on the accuracy of this report on the amount of lead metal in the spire (or is it the entire roof)?

> The 3-meter-tall statues are being sent to southwestern France for work that is part of a 6 million-euro ($6.8 million) renovation project on the cathedral spire and its 250 tons of lead.

https://www.sfgate.com/news/article/Cleaning-offers-rare-gli...

lucasverra(4114) 5 days ago [-]

safe !!

pkamb(4021) 5 days ago [-]

Have any pictures of the wood or joints or signatures that you mention?

GordonS(760) 5 days ago [-]

Did you feel any need to engrave your own names for the next inspectors?

JshWright(3519) 5 days ago [-]

For those asking about why there isn't visible water being sprayed on the fire... There's no point. Any firefighting efforts are focused on preventing the spread of the fire to other structures (potentially other parts of the same structure)

As a rule of thumb, the water flow necessary to extinguish a burning structure is the volume of the structure (in cubic feet) divided by 100. The resulting number is (in rough terms) the amount of water you need, in gallons per minute. For a fire this size, you're looking at tens of thousands of gallons of water per minute. It's just not possible.
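A back-of-the-envelope check of that rule of thumb (the building dimensions below are rough public figures, used purely for illustration):

```python
# Rule of thumb quoted above: needed flow in gallons per minute
# is roughly the structure's volume in cubic feet divided by 100.
length_ft, width_ft, height_ft = 420, 157, 115  # very rough envelope
volume_ft3 = length_ft * width_ft * height_ft
gpm = volume_ft3 / 100
print(f"needed flow ~ {gpm:,.0f} gallons per minute")  # ~75,000 GPM
```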

yters(1966) 5 days ago [-]

Based on some of the other commentators, tens of thousands of gallons per minute sounds doable. One mentioned a single truck can do two. Get ten or more of those trucks (which I assume Paris has) and that seems like a decent amount of water.

Raphmedia(10000) 5 days ago [-]

In the news they said that firefighters are entering the building to save as much art as possible before spraying water.

(The news networks are pretty much as clueless to the situation as everyone else, so take with a grain of salt)

Edit: They are now spraying the stone structures.

chrisseaton(2974) 5 days ago [-]

> you're looking at tens of thousands of gallons of water per minute

That's only like six fire engines, looking up the stats. (But I don't know anything about fire fighting.)

vanadium(10000) 5 days ago [-]

CNN's now reporting that water cannons are being aimed at the central fire.

cma(3388) 5 days ago [-]

Flying water tankers could do it, but they must act fast

(Edit: All these armchair quarterbacks think they know better than the president's team of social media advisers and vetters, they have no idea what goes into that kind of operation and shouldn't question the president based on spur of the moment analysis. You don't think professionals would have already considered all these points before still going forward with the tanker recommendation at the presidential level through the official communication channel?)

DennisP(3378) 5 days ago [-]

I'm watching it on CBS News right now and there is in fact lots of visible water being sprayed on the fire.

IAmEveryone(3929) 5 days ago [-]

Here's a very close video showing water (pumped from the Seine) being sprayed on the roof.

https://twitter.com/shivmalik/status/1117864730453061634

anon491throw(10000) 5 days ago [-]

Short of a very precise aerial tanker mission (and how long would it take to spin that up or get lucky with an aircraft in training nearby, fill up and then vector it), no way to put it out. It might've been feasible as there are very few tall buildings or structures in Paris and it's in the middle of the river with nothing much on either side.

People also should know that simultaneous fires on multiple floors of multi-story buildings typically aren't possible to fight either, reducing possible efforts to containment and damage mitigation.

rdiddly(10000) 5 days ago [-]

The other thing those people might consider, and this includes our president, is that they aren't firefighters, have no idea what to do, and surely live in a place unequal to Paris in any parameter upon which you'd care to compare. IT'S PARIS. They're in Paris. And to paraphrase Chevy Chase: You're not. They know what they're doing. Imagine you're the Paris fire chief. That's like being the NYC fire chief. Such a person has a distaste for explaining to superiors why they mismanaged a firefighting operation, and has a lifetime of experience, and hence, tends to manage it properly.

'Must act quickly!' Ya think? Hopefully the fire department in Paris, France is listening for gems like that from out here in the boonies / the ghetto hemisphere. How many medieval churches are there on our whole continent? Call us if you want a church bombed.

(edited slightly in response to child comments)

mschaef(10000) 5 days ago [-]

Any thoughts on why there isn't more evident action from building fire suppression systems? At least in smaller systems, I've seen sprinklers that work externally to the building to protect tall wooden spires, etc.

vjeux(1903) 5 days ago [-]

A twitter thread with technical information (in French, would someone be able to translate?) with why it's not happening: https://twitter.com/FranckMee/status/1117862376047382528

Wellshit(10000) 5 days ago [-]

If only there was a river nearby...

Faint(10000) 5 days ago [-]

> For those asking about why there isn't visible water being sprayed on the fire... There's no point. Any firefighting efforts are focused on preventing the spread of the fire to other structures (potentially other parts of the same structure)

> As a rule of thumb, the water flow necessary to extinguish a burning structure is the volume of the structure (in cubic feet) divided by 100. The resulting number is (in rough terms) the amount of water you need, in gallons per minute. For a fire this size, you're looking at tens of thousands of gallons of water per minute. It's just not possible.

I wonder if the fire could be covered with a huge tarpaulin or the like to try to suppress it.

nine_k(4085) 5 days ago [-]

I suppose that if it were possible, it would be very heavy, and could break the very structures it was trying to preserve.

I wonder if injecting large amounts of CO2 or N2 inside the structure would be feasible.

Water-soluble foam, which is much lighter and easier to produce in large volumes, is also known to be used to contain large fires, especially oil/fuel fires.

ineedasername(4092) 5 days ago [-]

This will probably be an unpopular opinion, so let me first start by saying I certainly mourn the loss of history, the loss of a beautiful structure, and that which was inside.

However, as I watched it burn, the thought uppermost in my mind was the nearly 200 years it took to build, and therefore 200 years' worth of sucking money and resources from the local (and probably some non-local) populace. And this during a period of history when many lived in abject poverty. How much more might have been added to society if those resources had been used to better effect?

It reminds me of how, even in modern times, the Catholic church has done much the same. My father grew up in the northeastern US in a poor urban area. His family was dirt poor and struggled to get 3 basic meals a day. Yet the local parish pressured, guilted, shamed, and instilled fear in the parishioners to get them to give 10% of their income to the church.

So yes, I mourn Notre Dame, but I can't separate it in my head from the financial predations of the church on its followers.

ryacko(10000) 5 days ago [-]

You must construct additional pylons.

svieira(3833) 4 days ago [-]

I'm sorry that your father's family grew up on the border between needing charity and giving it (that's always hard). But have you considered that maybe they chose to give 10% because they had discerned they could make do with the 90% that was left and they wanted to bestow charity like the widow with two mites? Simple is not foolish, though it may appear so. Your dad may not have understood what their thought process was, though he could see what appeared to be coercive external pressures. (That which is coerced is not charity and if that was going on it should have been stopped.)

i_am_nomad(10000) 5 days ago [-]

Organized religion and specifically Christianity have provided moral and organizational structures that have enriched human existence, knowledge and expression, perhaps more than any other force besides free market capitalism. The Catholic Church isn't flawless, but its benefits to the human race far outweigh its costs.

I'm an unbending agnostic, but I sometimes feel like I should go to church anyway, just to contribute to the social capital that Christianity generates.

mrmuagi(10000) 5 days ago [-]

Tourism is the opposite of sucking money from the local populace? It's a cultural landmark that, iirc, draws in a million tourists per year.

I am kind of torn about your reasoning, but I understand it myself. If science and technology had been advanced XYZ years earlier, wouldn't things be so much better? Except it's not that easy. Dumping a ton of gold in a pre-medieval economy is worthless. Dumping the right ideas instead (renaissance, medicine, industrialisation) would have been priceless. Though I wonder what could have happened if, instead of being taxed 10%, people had invested that in other ways -- wouldn't it be nice if it had compounded over all those years...





Historical Discussions: I found two identical packs of Skittles among 468 packs (April 17, 2019: 803 points)

(807) I found two identical packs of Skittles among 468 packs

807 points 2 days ago by bookofjoe in 244th position

possiblywrong.wordpress.com | Estimated reading time – 9 minutes | comments | anchor

Introduction

This is a follow-up to a post from earlier this year discussing the likelihood of encountering two identical packs of Skittles, that is, two packs having exactly the same number of candies of each flavor. Under some reasonable assumptions, it was estimated that we should expect to have to inspect "only about 400-500 packs" on average until encountering a first duplicate.

So, on 12 January of this year, I started buying boxes of packs of Skittles. This past week, "only" 82 days, 13 boxes, 468 packs, and 27,740 individual Skittles later, I found the following identical 2.17-ounce packs:

Test procedure

I purchased all of the 2.17-ounce packs of Skittles for this experiment from Amazon in boxes of 36 packs each. From 12 January through 4 April, I worked my way through 13 boxes, for a total of 468 packs, at the approximate rate of six packs per day. This was enough to feel like I was making progress each day, but not enough to become annoying or risk clerical errors. For each six-pack recording session, I did the following:

  1. Take a pack from the box, open it, and empty and sort the contents onto a blank sheet of paper.
  2. Take a photo of the contents of the pack.
  3. Record, with pen and paper, the number of Skittles of each color in the pack (more on this later).
  4. Empty the Skittles into a bowl.
  5. Repeat steps 1-4; after six packs, save and review the photos, recording the color counts to file, verifying against the paper record from step 3, and checking for duplication of a previously recorded pack.
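Step 5's duplicate check is easy to mechanize. A minimal sketch, assuming each pack is recorded as a 5-tuple of (strawberry, orange, lemon, green apple, grape) counts:

```python
def first_duplicate(packs):
    """Return (i, j) for the first pack j whose color counts repeat
    an earlier pack i, or None if all recorded packs are distinct."""
    seen = {}  # counts tuple -> index of first pack with those counts
    for j, counts in enumerate(packs):
        if counts in seen:
            return seen[counts], j
        seen[counts] = j
    return None
```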

The photos captured all of the contents of each pack, including any small flakes and chips of flavored coating that were easy to disregard... but also larger "chunks" of misshapen paste that were often only partially coated or not at all, that required some criteria up front to determine whether or how to count. For this experiment, my threshold for counting a chunk was answering "Yes" to all three of (a) is it greater than half the size of a "normal" Skittle, (b) is it completely coated with a single clearly identifiable flavor color, and (c) is it not gross, that is, would I be willing to eat it? Any "No" answer resulted in recording that pack as containing "uncounted" material, such as the pack shown below.

Example of a Skittles pack recorded with 15 green candies and an "uncounted" chunk.

The entire data set is available here as well as on GitHub. The following figure shows the photos of all 468 packs (the originals are 1024×768 pixels each), with the found pair of identical packs circled in red.

All 468 packs of Skittles, arranged top to bottom, in columns left to right. Each pair of columns corresponds to a box of 36 packs. The two identical packs are circled in red.

But... why?

So, what's the point? Why bother with nearly three months of effort to collect this data? One easy answer is that I simply found it interesting. But I think a better answer is that this seemed like a great opportunity to demonstrate the predictive power of mathematics. A few months ago, we did some calculations on a cocktail napkin, so to speak, predicting that we should be able to find a pair of identical packs of Skittles with a reasonably– and perhaps surprisingly– small amount of effort. Actually seeing that effort through to the finish line can be a vivid demonstration for students of this predictive power of what might otherwise be viewed as "merely abstract" and not concretely useful mathematics.

(As an aside, I think the fact that this particular concrete application happens to be recreational, or even downright frivolous, is beside the point. For one thing, recreational mathematics is fun. But perhaps more importantly, there are useful, non-recreational, "real-world" applications of the same underlying mathematics. Cryptography is one such example application; this experiment is really just a birthday attack in slightly more complicated form.)

Assumptions and predictions

For completeness, let's review the approach discussed in the previous post for estimating the number of packs we need to inspect to find a duplicate. We assume that the color of each individual Skittle is independently and uniformly distributed among the possible flavors (strawberry, orange, lemon, green apple, and grape). We further assume that the total number of Skittles in a pack is independently distributed with density $f(n)$, where we guessed at $f$ based on similar past studies.

We use generating functions to compute the probability $p$ that two particular randomly selected packs of Skittles would be identical, where

$$p = \sum_n f(n)^2 \sum_{k_1+k_2+k_3+k_4+k_5=n} \left(\frac{n!}{k_1!\,k_2!\,k_3!\,k_4!\,k_5!}\cdot\frac{1}{5^n}\right)^2$$

Given this, a reasonable approximation of the expected number of packs we need to inspect until encountering a first duplicate is $\sqrt{\pi/(2p)}$, or about 400-500 packs depending on our assumption for the pack size density $f$.
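The collision probability above is easy to evaluate numerically. Below is a sketch; the pack-size density used here is a placeholder (a discretized normal with mean 60 and standard deviation 2.5, my assumption, not the post's actual guessed density):

```python
from math import exp, factorial, pi, sqrt

COLORS = 5

def q_identical(n):
    """P(two packs of exactly n candies have identical color counts),
    each candy i.i.d. uniform over COLORS flavors, computed via the
    generating function (sum_m x^m / (m!)^2) ** COLORS."""
    g = [1.0 / factorial(m) ** 2 for m in range(n + 1)]
    poly = [1.0]
    for _ in range(COLORS):
        new = [0.0] * (n + 1)
        for i, a in enumerate(poly):
            if a:
                for j in range(n + 1 - i):
                    new[i + j] += a * g[j]
        poly = new
    return poly[n] * factorial(n) ** 2 / COLORS ** (2 * n)

# Placeholder pack-size density f(n).
sizes = range(45, 76)
w = [exp(-((n - 60) ** 2) / (2 * 2.5 ** 2)) for n in sizes]
f = {n: wi / sum(w) for n, wi in zip(sizes, w)}

p = sum(f[n] ** 2 * q_identical(n) for n in sizes)
print(f"p = {p:.3g}; expected packs to first duplicate ~ {sqrt(pi / (2 * p)):.0f}")
```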

Observations

The most common and controversial question asked about Skittles seems to be whether all five flavors are indeed uniformly distributed, or whether some flavors are more common than others. The following figure shows the distribution observed in this sample of 468 packs.

Average number of Skittles of each flavor in a pack. The assumed uniform average of 11.8547 Skittles of each color is shown by the black line.

Somewhat unfortunately, this data set potentially adds fuel to the frequent accusation that the yellow Skittles dominate. However, I leave it to an interested reader to consider and analyze whether this departure from uniformity is significant.
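For that interested reader, the standard first step is a chi-squared goodness-of-fit test against the uniform hypothesis. A sketch with placeholder per-flavor totals (only the 27,740 grand total is from the post; substitute the real counts from the linked data set):

```python
# Placeholder totals that sum to 27,740; NOT the post's actual data.
observed = {"strawberry": 5540, "orange": 5510, "lemon": 5690,
            "green apple": 5500, "grape": 5500}
total = sum(observed.values())
expected = total / len(observed)  # 5,548 per flavor if uniform
chi2 = sum((o - expected) ** 2 / expected for o in observed.values())
print(f"chi^2 = {chi2:.1f} on {len(observed) - 1} degrees of freedom")
# Compare against the chi-squared(4) critical value, ~9.49 at p = 0.05.
```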

How accurate was our prior assumed distribution for the total number of Skittles in each pack? The following figure shows the observed distribution from this sample of 468 packs, with the mean of 59.2735 Skittles per pack shown in red.

Histogram of total number of Skittles in each pack. The mean of 59.2735 is shown in red.

Although our prior assumed average of 60 Skittles per pack was reasonable, there is strong evidence against our assumption of independence from one pack to the next, as shown in the following figure. The x-axis indicates the pack number from 1 to 468, and the y-axis indicates the number of Skittles in the pack, either total (in black) or of each individual color. The vertical grid lines show the grouping of 36 packs per box.

Number of Skittles per pack (total and of each color) vs. pack number.

The colored curves at bottom really just indicate the frequency and extent of outliers for the individual flavors; for example, we can see that every color appeared on at least 2 and at most 24 Skittles in every pack. The most interesting aspect of this figure, though, is the consecutive spikes in total number of Skittles shown by the black curve, with the minimum of 45 Skittles in pack #291 immediately followed by the maximum of 73 Skittles in pack #292. (See this past analysis of a single box of 36 packs that shows similar behavior.) This suggests that the dispenser that fills each pack targets an amortized rate of weight or perhaps volume, got jammed somehow resulting in an underfilled pack, and in getting "unjammed" overfilled the subsequent pack.

This is admittedly just speculation; note, for example, that the 36 packs in each box are relatively free to shift around, and I made only a modest effort to pull packs from each box in a consistent "top to bottom, front to back" order as I recorded them. So although each group of 36 packs in this data set definitely come from the same box, the order of packs within each group of 36 does not necessarily correspond to the order in which the packs were filled at the factory.

At any rate, if the objective of this experiment were to obtain a representative "truly random" sample of packs of Skittles, then the above behavior suggests that buying these 36-pack boxes in bulk is probably not recommended.

Stopping rule

Finally, one additional caveat: fortunately the primary objective of this experiment was not to obtain a "truly random" sample, but only to confirm the predicted "ease" with which we could find a pair of identical packs of Skittles. However, suppose that we did want to use this data set as a "truly random" sample... and further suppose that we could eliminate the practical imperfections suggested above, so that each pack was indeed a theoretically perfect, independent random sample.

Then even in this clean room thought experiment, we still have a problem: by stopping our sampling procedure upon encountering a duplicate, we have biased the distribution of possible resulting sample data sets! This can perhaps be most clearly seen with a simpler setup that allows an analytical solution: suppose that each pack contains just $n=2$ Skittles, and each individual Skittle is independently equally likely to be one of just two possible colors, red or green. If we collect any fixed number of sample packs, then we should expect to observe an "all-red" pack with two red Skittles exactly 1/4 of the time. But if we instead collect sample packs until we observe a first duplicate, and then count the fraction that are all red, the expected value of this fraction is slightly less than 1/4 (181/768, to be exact). That is, by stopping with a duplicate, we are less likely to even get a chance to observe the more rare all-red (or all-green) packs.

It's an interesting problem to quantify the extent of this effect (which I suspect is vanishingly small) with actual packs of Skittles, where the numbers of candies are larger, and the probabilities of those "extreme" compositions such as all reds is so small as to be effectively zero.
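The 181/768 figure for this two-candy, two-color toy model is easy to check by simulation; a minimal sketch:

```python
import random

def trial(rng):
    """Draw 2-candy, 2-color packs until a composition repeats; return
    the fraction of drawn packs (duplicate included) that were all red."""
    seen, packs = set(), []
    while True:
        reds = rng.randint(0, 1) + rng.randint(0, 1)  # red count in one pack
        packs.append(reds)
        if reds in seen:
            break
        seen.add(reds)
    return sum(1 for r in packs if r == 2) / len(packs)

rng = random.Random(0)
n = 200_000
est = sum(trial(rng) for _ in range(n)) / n
print(f"simulated {est:.4f} vs 181/768 = {181 / 768:.4f} vs 1/4 = 0.2500")
```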




All Comments: [-] | anchor

tentakull(10000) 2 days ago [-]

this is the most autistic post I've seen on hacker news

bookofjoe(244) 2 days ago [-]

I posted it. I'm a 70-year-old retired (38 years experience) neurosurgical anesthesiologist. This is the first time in my life anyone has ever alluded to the possibility that I might be on the spectrum. Better late than never!

metaphor(4021) 2 days ago [-]

> From 12 January through 4 April, I worked my way through 13 boxes, for a total of 468 packs, at the approximate rate of six packs per day.

Honestly thought I was going to read a 'just cause' blog on machine vision and process automation, e.g. 3 months to develop a functional prototype and train the system, 3 days to process 468 packs...and automated repeatability at the end of it all.

SlowRobotAhead(10000) 2 days ago [-]

I assume most people read the title and were expecting machine vision. I definitely was within the first two seconds of reading.

personjerry(2576) 2 days ago [-]

No one's mentioned this, but it's unlikely Skittles are 'randomly distributed'. Instead, it's likely the Skittles factory has some system that attempts to distribute colours reasonably so no one bag is too skewed. So the whole premise is faulty.

jdreaver(10000) 2 days ago [-]

No one has mentioned it? Did you read the article? The article discusses exactly what you just said.

learnstats2(10000) 2 days ago [-]

I would bet the Skittles factory has no such system - the data appears to be highly consistent with what you would expect from mixing the skittles well and making entirely random bags.

dblohm7(3466) 2 days ago [-]

Don't care. Kill green apple and bring back lime!

thebouv(4085) 2 days ago [-]

This is the real truth here.

lanius(10000) 2 days ago [-]

What's the likelihood of getting a potato chip bag with only one chip?

always4getpass(10000) 2 days ago [-]

Zero because QA :)

justaaron(10000) 2 days ago [-]

that is a seriously skewed candy budget... do you have any teeth left?

pcurve(4078) 2 days ago [-]

As one gets older, cavities become less common. sigh

atdrummond(4081) 2 days ago [-]

OP didn't eat them, per the post.

joelrunyon(1691) 2 days ago [-]

Anyone do the math on this?

tlrobinson(355) 2 days ago [-]

It's interesting this person can program, but chose to manually count 27,740 Skittles instead of automate it in some way.

It feels like an almost-trivial computer vision problem.

peterburkimsher(3294) 2 days ago [-]

It's been done! A teenager built an M&M and Skittles sorting machine a few years ago. I expected this article to be based on that, but to my surprise, he counted them all manually.

https://hackaday.com/2017/02/06/mms-and-skittles-sorting-mac...

To the author's credit though, doing it manually probably made more sense in his situation. It would take a few months to build the auto-sorter, which would've had to be tweaked to handle the misshapen blobs. The collection of Skittles photos is impressive too, and hopefully will end up in a Maths classroom.

tayo42(10000) 2 days ago [-]

Computer vision is trivial? When did that happen? And how was it made trivial?

tlrobinson(355) 2 days ago [-]

Since no one seemed believed it's almost trivial, I went ahead and implemented it:

https://imgur.com/t7ivg0V

It took about an hour of Googling OpenCV questions. Also, I've never actually used OpenCV before, and don't use Python regularly.

I haven't tested it against the full set of images. Even if it's not perfect you could at least use this to narrow down ones that are within some threshold, then manually verify they were tagged correctly.

I'll post the code shortly.
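A minimal sketch of that kind of color counting (not the code referred to above; it assumes OpenCV 4 and uses guessed HSV hue ranges that would need tuning against the actual photos):

```python
import cv2
import numpy as np

# Guessed HSV ranges for the five flavor colors; red also wraps around
# hue 180 in OpenCV, so a real version would OR in a second red band.
RANGES = {
    "strawberry": ((0, 120, 80), (8, 255, 255)),
    "orange":     ((9, 120, 80), (22, 255, 255)),
    "lemon":      ((23, 120, 80), (35, 255, 255)),
    "green":      ((36, 80, 60), (85, 255, 255)),
    "grape":      ((120, 60, 40), (165, 255, 255)),
}

def count_candies(path):
    """Count candies of each color in one pack photo by masking each
    hue range and counting sufficiently large contours."""
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    counts = {}
    for name, (lo, hi) in RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        counts[name] = sum(1 for c in contours if cv2.contourArea(c) > 200)
    return counts
```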

heavymark(3497) 2 days ago [-]

Hard to imagine that process would be trivial, especially since, as he noted in the article, he had to have criteria for when to count a Skittle, such as whether it was too small, appeared inedible, or had other such characteristics. But after completing the count manually, it would be interesting to run programs on the photos to see how close they come to the actual numbers.

mtw(3800) 2 days ago [-]

I hope he uses all the Skittles to decorate a wall or something :) otherwise it's a lot of wasted food

acct1771(10000) 2 days ago [-]

High fructose corn syrup is barely food.

pseudolus(62) 2 days ago [-]

Just out of curiosity, did you actually consume them?

m45t3r(4016) 2 days ago [-]

OP said in the comments of his blog that he didn't actually consume them; instead, he gave them to relatives.

mikorym(10000) 2 days ago [-]

Cool thing to do.

This is similar to the birthday problem: how many people do you need to have a probability of > x% that two are born on the same day?

It's something like 50 people to have a probability of > 80%. You can conduct this experiment at a school, using each class as a sample experiment to see if there are two pupils born on the same day.

possiblywrong(10000) 2 days ago [-]

Author of the article here-- right! This was the key 'real world' motivation for this experiment as an attempt at a pedagogical tool; from the article:

> As an aside, I think the fact that this particular concrete application happens to be recreational, or even downright frivolous, is beside the point. For one thing, recreational mathematics is fun. But perhaps more importantly, there are useful, non-recreational, "real-world" applications of the same underlying mathematics. Cryptography is one such example application; this experiment is really just a birthday attack in slightly more complicated form.

bookofjoe(244) 2 days ago [-]

'In a room of just 23 people there's a 50-50 chance of at least two people having the same birthday. In a room of 75 there's a 99.9% chance of at least two people matching.' https://betterexplained.com/articles/understanding-the-birth...
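Those figures follow from the standard product formula for distinct birthdays; a quick check:

```python
def p_shared_birthday(n, days=365):
    """P(at least two of n people share a birthday), uniform birthdays."""
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (days - i) / days
    return 1.0 - p_distinct

for n in (23, 50, 75):
    print(n, round(p_shared_birthday(n), 4))  # ~0.507, ~0.970, ~0.9997
```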

bakul(10000) 2 days ago [-]

This guy missed an excuse to build a Lego sorting machine....

pixl97(10000) 2 days ago [-]

Sorting machine? This is Hacker News; a visual AI that would count each color is all the rage these days.

lordnacho(4103) 2 days ago [-]

This is brilliant. Takes a theory (iid colors) that turns out to be wrong, but still gets a hell of a lot of conclusions out of it.

Since this is a nerd site, the next step is to use this:

https://www.planet-gbc.com/

Build a Lego contraption to push some skittles through a sensor that counts them.

cshimmin(4116) 1 day ago [-]

I don't think the author presented any evidence that invalidates the assumption of IID colors. They show a histogram of the different counts of each color, which seems more or less consistent with a uniform distribution. There are some fluctuations but they're not surprising; the Poisson error bars would be roughly +/- 0.16 in that figure. With error bars that large, it would be surprising if the data were in more exact agreement with the flat line; it's actually related to the same question that the author is examining ('what are the odds to observe all 5 colors at exactly their expected rate of 1/5 within measurement error?').

They do speculate that the number of candies per pack is not IID, i.e., that there are (anti)correlations from one pack to the next. But without knowing more about the packing process, and presumably also having some lot/serial number information for each pack, it would be pretty hard to establish this.

z3t4(3839) 2 days ago [-]

I like practical statistics. For example, when betting red/black in a casino, what is the chance that you would lose 10 times in a row? Just keep doubling, they say, but eventually you will get a bad streak and won't have enough money, or you'll hit the limit.
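The streak arithmetic is quick to verify (a sketch assuming a European wheel, where 19 of 37 pockets lose an even-money bet):

```python
p_lose = 19 / 37  # P(losing one even-money bet) on a single-zero wheel
k = 10
print(f"P(losing {k} in a row) = {p_lose ** k:.4%}")  # about 0.13%
print(f"stake needed on bet {k + 1} = {2 ** k} units")  # 1,024x the opener
```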

e1g(4112) 2 days ago [-]

If it landed on red 10 times in a row so far, the odds of landing another red on the next round are almost 50%. Gambler's fallacy is so much fun to observe at the tables though :)

sharkweek(1051) 2 days ago [-]

I remember an experiment in high school math of some variety.

The teacher had us all 'guess' what twenty coin flips would look like. The longest streak any student wrote on their paper was maybe 4-ish in a row or something.

He then had us all actually flip the coins and record the results. One student had like 11 in a row, most hit a streak of somewhere between 5-8 of the same result.

Lesson learned, we're really bad at guessing 50/50 streaks.
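A quick simulation makes the classroom lesson concrete; a run of five or more shows up in 20 fair flips nearly half the time:

```python
import random

def longest_run(flips):
    """Length of the longest run of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

rng = random.Random(1)
trials = 100_000
hits = sum(longest_run([rng.randint(0, 1) for _ in range(20)]) >= 5
           for _ in range(trials))
print(f"P(run of 5+ in 20 flips) ~ {hits / trials:.2f}")  # roughly 0.46
```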

mindfulplay(10000) 2 days ago [-]

What a time to be alive!

HNLurker2(3889) 2 days ago [-]

This is 2 minutes paper. And I am your host.

dekhn(10000) 2 days ago [-]

I'm so glad to see there are people who think like me. I have spent thousands of hours doing various projects like this.

The only difference is, I normally spend my time automating the process with computer vision (a skittle sorter/counter wouldn't be that hard to build; opening the bag is harder than identifying colors). And then I never really finish the project.

joshvm(4115) 2 days ago [-]

Yup, dump on a piece of paper and segment by contours; bright colours, so super easy. Though you'd lose the nice skittle graphs, which I think is half the fun of the process.

yurishimo(10000) 1 day ago [-]

In high school, our second year engineering classes required us to build a skittle sorter out of lego and some sensors/software.

The only requirement was a hopper for the teacher to dump the skittles into at the beginning. She would have a different amount for each group that was pre-counted and our numbers had to match hers at the end within a certain % of error.

It was a month+ long project and really fun to see each team's solution in how they moved/sorted/stored the final product. Some teams used long conveyor belts, others used short stacked belts, etc. It was all really interesting and I thoroughly enjoyed it!

LeifCarrotson(10000) 2 days ago [-]

As someone who (also?) builds automated industrial machinery for a living (and agrees that opening the bag would be harder than picking colors), the challenge would be in improving reliability from a typical 97% to the nigh-impossible task of getting 27,000 operations correct in a row. What will you do with small pieces of skittle shell, or broken skittles that lack a shell, or 2 skittles that are bonded together, or shreds of red bag that got into the hopper? What will you do if the dye is running low, or is contaminated, and the color fades or shifts between bags? The original article had some rather difficult to automate, subjective criteria for handling these questions.

One thing to remember in the panic over automation killing off jobs is that handling the normal case is maybe 20% of the work. Handling routine errors is 80% of the work. Handling the errors that crop up once in a thousand parts - or dozens of times in this small sample - is why running lights-out is so difficult, and why we'll always need human operators and maintenance staff who can diagnose and correct unexpected problems.

That said, I'm at least 97% confident that OP, in manually counting and marking down these 27,000 items, mis-counted or mis-marked at least once. So I'm not feeling too bad about my pessimism regarding the automated solution. Even if it can't identify if a given sample is 'gross' or not, it will add 1 to a counter reliably!

Also, one test of 468 bags that shows one result does not prove anything about the average number required. You'd need to run this test many times for that, and for that you'd need automation!

__m(4111) 2 days ago [-]

Well, opening the packs is probably not that much of an effort compared to sorting/counting 27,000 Skittles. Instead of 3 months you could break it down to one or two days.

daveFNbuck(10000) 2 days ago [-]

If you have to put the bag in the machine, it's not much more effort to tear it open. If you do that over a large funnel, that should work to get all the skittles into the counter most of the time.

I was thinking that time could be saved by not sorting the skittles so nicely. Just take a picture of the skittles from each bag spread out randomly. You should be able to figure out the color distribution just by counting pixels within each color range.
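
A minimal sketch of that pixel-counting idea with a canvas; the classify() thresholds are placeholder guesses and would need tuning to the actual lighting and camera:

function countColors(img) {
    var canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    var counts = { red: 0, orange: 0, yellow: 0, green: 0, purple: 0, other: 0 };
    function classify(r, g, b) {
        // placeholder thresholds; tune against real photos
        if (r > 150 && g < 80 && b < 80) return 'red';
        if (r > 180 && g > 80 && g < 160 && b < 80) return 'orange';
        if (r > 180 && g > 160 && b < 100) return 'yellow';
        if (g > 120 && r < 100 && b < 100) return 'green';
        if (r > 80 && b > 80 && g < 70) return 'purple';
        return 'other'; // background, shadows, shell fragments
    }
    for (var i = 0; i < data.length; i += 4) {
        counts[classify(data[i], data[i + 1], data[i + 2])]++;
    }
    return counts; // the ratios of the five colour buckets estimate the mix
}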

towndrunk(3991) 2 days ago [-]

So if I'm statistically illiterate, what books do you recommend for a beginner?

RobertDeNiro(10000) 2 days ago [-]

Andrew Gelman has some pretty good books/lectures, but it might be too advanced.

dmitryminkovsky(4026) 2 days ago [-]

Stats is the science of sciences in my opinion, and anyone who brings it to life like this is awesome.

robertAngst(3920) 2 days ago [-]

Have you seen Efficiency Is Everything's data on food?

https://efficiencyiseverything.com/food/

city41(3077) 2 days ago [-]

I wonder if it would have been worth it to build a device to scan the skittles for you.

edit: would love an explanation for the downvotes

phjesusthatguy3(10000) 2 days ago [-]

I didn't downvote you, but he did have a threshold for 'this misshapen lump isn't a skittle'.

omarchowdhury(2428) 2 days ago [-]

A device seems like overkill. Write code to apply machine vision to this grid: https://possiblywrong.files.wordpress.com/2019/04/skittles_a...

eriktrautman(3635) 2 days ago [-]

The experiment is great hands-on math, but I would have enjoyed a discussion of variance versus expected value and the difference between short- and long-term averages. It's too easy to infer that everything is great because he was lucky enough to land within the target range, but the likelihood of that occurring is not actually that high and was only implied by the shape of the Monte Carlo distribution in his previous post. When experimental results are this conveniently "accurate", amateurs in the audience may take away the kinds of wrong inferences that create "it's a hot day, so it must be global warming" logical inaccuracies.

possiblywrong(10000) 2 days ago [-]

Author of the article here; this is a great point. This experiment initially stemmed from a nice analytical solution to the problem of computing the expected value (via generating functions as described in the post). Computing other moments, let alone the entire distribution, required some Monte Carlo simulation, as shown at the end of the first article (https://possiblywrong.wordpress.com/2019/01/09/identical-pac...) before I started the experiment.

And even this histogram assumes a distribution of total number of Skittles per pack (that varies) that I had to guess at beforehand. In hindsight, the final sample distribution suggests that I probably initially overestimated the true variance, and thus also overestimated the expected number of packs I would need to inspect. In other words, this experiment arguably took longer than 'average.'

So you're right: this experiment could have extended into 700 packs, 800 packs... and still have been consistent with the assumed model; I would simply have been in an unfortunate 90th-percentile possible universe where it took much longer than 'average.'
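
For readers who want to play along, a minimal Monte Carlo sketch of the experiment; note that the pack model here (exactly 60 Skittles, 5 equally likely colours) is a crude stand-in for the per-pack count distribution the author had to estimate, not his fitted model:

function packsUntilDuplicate() {
    // open packs until two share the exact same colour counts
    var seen = new Set();
    for (var n = 1; ; n++) {
        var counts = [0, 0, 0, 0, 0];
        for (var i = 0; i < 60; i++) counts[Math.floor(Math.random() * 5)]++;
        var key = counts.join(',');
        if (seen.has(key)) return n;
        seen.add(key);
    }
}
var sum = 0, trials = 1000;
for (var t = 0; t < trials; t++) sum += packsUntilDuplicate();
console.log('mean packs needed:', sum / trials);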

jordn(1389) 2 days ago [-]

Jesus Christ, apple and grape?!? Poor Americans... in the UK those are lime and blackcurrant.

kosievdmerwe(10000) 2 days ago [-]

Why poor Americans? It's purely a matter of taste.

Apple is culturally significant, and blackcurrants are banned in the US (they carried a disease that threatened the white pine forests). So one is more important than lime, and the other is totally unfamiliar.

maffydub(10000) 2 days ago [-]

Maybe I just have a really bad sense of taste, but I thought they were just 'green' and 'purple' (also in the UK).

fouc(3970) 2 days ago [-]

>So, what's the point? Why bother with nearly three months of effort to collect this data? One easy answer is that I simply found it interesting. But I think a better answer is that this seemed like a great opportunity to demonstrate the predictive power of mathematics. A few months ago, we did some calculations on a cocktail napkin, so to speak, predicting that we should be able to find a pair of identical packs of Skittles with a reasonably– and perhaps surprisingly– small amount of effort. Actually seeing that effort through to the finish line can be a vivid demonstration for students of this predictive power of what might otherwise be viewed as "merely abstract" and not concretely useful mathematics.

waffleguy(10000) 2 days ago [-]

Yo mama raised a fool?

Johnny555(4071) 2 days ago [-]

I'd think that he could have also proven this with a lot less work by writing a simulation.

robertAngst(3920) 2 days ago [-]

>Actually seeing that effort through to the finish line can be a vivid demonstration for students of this predictive power of what might otherwise be viewed as "merely abstract" and not concretely useful mathematics.

This is how I feel about engineering.

If you design something, prove it out with logic and math. It will work.

Or you will learn where you were wrong.

jv22222(759) 2 days ago [-]

It's really hard for the human brain to comprehend what these kinds of long odds mean in practice. This is a great project that illustrates it rather well.

indigodaddy(1692) 2 days ago [-]

Can I ask the point of quoting a snippet of the article without comment?

koliber(4100) 2 days ago [-]

Why climb Everest? Because it was there!

Pursue your passions, whether it is climbing a huge mountain or finding identical packs of skittles. It is what makes the world great and interesting.

muzani(3596) 2 days ago [-]

This is a big part of why I like to play games of chance like poker and football simulators. It's a way of testing your math-predictive skills.





Historical Discussions: 1060-hour image of the Large Magellanic Cloud captured by amateur astronomers (April 15, 2019: 749 points)

(750) 1060-hour image of the Large Magellanic Cloud captured by amateur astronomers

750 points 5 days ago by dmitrybrant in 3385th position

astrospace-page.blogspot.com | Estimated reading time – 3 minutes | comments | anchor

1060 is the number of hours needed to capture this high-resolution image (204 megapixels) of the Large Magellanic Cloud. It may be the longest-exposure image ever produced by the amateur astronomy community.
In astrophotography, the amount of time you spend imaging a celestial object is fundamental. The longer your camera's shutter is open, the more light you collect, and the darkest regions of the sky start to become clear. Amateur astronomers are used to long integration times of a few minutes or even a few tens of hours. However, reaching a total of several hundred hours increases the complexity of image processing and therefore remains quite rare... yet five keen amateur astrophotographers challenged themselves to capture a picture with 1060 hours of total exposure time, which can be considered a world record (professional astronomy excluded).

This image is not only a technical accomplishment; it also brings scientific interest to one of the most amazing deep-sky objects of the southern sky: the Large Magellanic Cloud (LMC).

★ A Team of amateur astronomers behind this feat

The image is a mosaic made of 16 smaller fields of view which, once stitched together, form a high-resolution image of 204 million pixels! As a matter of fact, this is not the work of a single person but of a team of five French amateur astronomers called 'Ciel Austral': Jean Claude CANONNE, Philippe BERNHARD, Didier CHAPLAIN, Nicolas OUTTERS and Laurent BOURGON.

'Ciel Austral' owns a remotely-controlled observatory located under some of the best skies on the planet, in Chile, at the El Sauce Observatory (Coquimbo Region). A 160-mm APO refractor telescope and a Moravian CCD were used to obtain this wonderful field. The datasets were taken over several months spanning 2018 and 2019. The files handled represent 620 GB and needed a few hundred hours of image processing! Once stacked together, they add up to the stunning figure of 1060 hours of exposure. If you are curious, we invite you to have a look at their official website here.

You have certainly noticed that the color rendering of this image is quite unusual. Indeed, the astrophotographers used special filters which transmit only narrow parts (lines) of the visible spectrum: the Hydrogen-alpha line at 656 nm, the Sulfur line at 672 nm and the Oxygen III line at 500 nm. These filters emphasize chemical components located in high-density gas regions like nebulae, which standard RGB imaging cannot do.

Settled in Chile since 2017, the Ciel Austral observatory gives this 5-member team a way to expand their knowledge and skills in astronomical imagery and to fulfill its most ambitious projects. So stay tuned for more of their upcoming fantastic images.

★ The Large Magellanic Cloud

This image shows us a unique view of the most famous night-sky object of Southern-Hemisphere astronomy. The LMC is actually a satellite of our Milky Way galaxy, pretty close to us at a distance of 50 kiloparsecs (163,000 light-years). Scientists estimate it will complete a full orbit around us in only 1.5 billion years...

The Large Magellanic Cloud belongs to the Local Group, a collection of about 50 galaxies close to each other, including our own.

If you have the opportunity to spend a night under the southern skies, this naked-eye-visible object will surprise you with its wide angular size and its strong brightness: the LMC covers a patch of sky that could contain 20 Moon diameters, shining at magnitude 0.9!




All Comments: [-] | anchor

labster(3817) 5 days ago [-]

The glowing up arrow this site displays makes reading on mobile really annoying. Why distract from the content with a UI pulsar?

chairmanwow(3941) 5 days ago [-]

I came here to say this. That one feature makes their site unreadable for me on mobile.

kristopolous(3289) 5 days ago [-]

Maybe the designer believes the feature of scrolling to the banner image at the top is absolutely essential and should be done frequently! That's why they made it red and blink, it's very important!

PMan74(10000) 5 days ago [-]

Does anybody know how expensive the setup described is?

- Remotely-controlled observatory at the El Sauce Observatory in Chile

- A 160-mm APO-refractor telescope and a Moravian CCD

- Presumably hefty image processing requirements

- etc.

I get that these guys are amateurs in that they are not being paid for this but presumably this costs some serious money? Or are the components they use in reach of a well to do hobbyist these days (all relative I know)?

coffexx(10000) 5 days ago [-]

All in USD: scope $13k, camera $7k, mount $9k.

That's just the big-ticket stuff. They'll have a guidescope, colour filters, a laptop (I assume), and all sorts of paraphernalia supporting the effort.

Given that it's remote controlled and in an observatory in Chile, I suspect that adds another order of magnitude to the cost. But I'm unsure specifically how much, or whether they're renting scope time.

You can buy much cheaper equipment and still do admirably; this setup is really quite extreme for a hobbyist.

Tepix(4014) 5 days ago [-]

Here is a writeup with pictures of a remote controlled observatory located in southern France (ROSA-REMOTE): http://lievenpersoons.com/astrophotography/observatory.html

Sounds like it took several trips and weeks to get it right.

Spinosaurus(10000) 5 days ago [-]

What is the very bright object with a bluish hue in the top left?

anonytrary(3971) 5 days ago [-]

Also: Does anyone here know what star that is? Is it one of the brightest stars in the night sky, or is it just super bright relative to everything else in this picture?

pixl97(10000) 5 days ago [-]

Most likely a star between us and the LMC.

anm89(3783) 5 days ago [-]

' Indeed, astrophotographers used a couple of special filters which transmit narrow parts -lines- of the visible spectrum : the Hydrogen Alpha line at 656 nm, the Sulfur line at 672 nm and the Oxygen III spectral line at 500 nm'

Can anyone comment on how much the images produced by these filters differ from what the human eye would see if somehow it was able to look at these objects. Are they also taking in information from the non visible spectrum and coloring it or is this all just a focusing of a light that real humans would have been able to perceive?

I know they mentioned using different filters to achieve the two different images but was

_ph_(10000) 5 days ago [-]

There are several parts to the answer to your question.

First of all, emission nebulas are not very bright, so no telescope can give a picture as bright as a long exposure does. If you can see a nebula through a telescope at all, it will be very faint. That triggers another effect in your eye: the cells for color reception are not very sensitive. As with general night vision, you will usually see nebulas only with your light-sensitive receptors, which don't see colors. So a nebula will appear in a grey-greenish color.

High-quality pictures of nebulas are taken at very specific wavelengths, the common emission frequencies you listed. Even at high brightness, they wouldn't directly convert into a good color picture, as 500 nm is turquoise while 656 and 672 nm are very deep red. A color image converting these wavelengths directly into RGB values would not be very impressive; it would look more like the bottom image on the page. So usually a color mapping is used to generate impressive images which also show a lot of the detail information. With 3 different 'colors' in the source image, you can apply an arbitrary transformation to generate an RGB image. For example, most images from the Hubble telescope use a common mapping which is consequently called the Hubble-telescope mapping. As shown on the page, you can create very different-looking images from the same data set by choosing the color mapping.
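
As a concrete illustration of one such mapping, here is a minimal sketch of the common SII→red, Hα→green, OIII→blue assignment (often called the Hubble palette); the three inputs are assumed to be same-sized arrays of 0-255 grayscale intensities from the narrowband frames:

function hubblePalette(sii, ha, oiii, width, height) {
    var out = new Uint8ClampedArray(width * height * 4);
    for (var i = 0; i < width * height; i++) {
        out[i * 4]     = sii[i];  // Sulfur II (672 nm) -> red channel
        out[i * 4 + 1] = ha[i];   // Hydrogen alpha (656 nm) -> green channel
        out[i * 4 + 2] = oiii[i]; // Oxygen III (500 nm) -> blue channel
        out[i * 4 + 3] = 255;     // fully opaque
    }
    return new ImageData(out, width, height);
}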

jankotek(10000) 5 days ago [-]

Under excellent conditions the visual experience is very similar to the pictures. An emission nebula only emits light in a very narrow band, and a filter will suppress the stars and make the nebula higher-contrast.

jedimastert(4000) 5 days ago [-]

This link might help your question:

https://photographingspace.com/ap-color/

Short answer: the color they mapped to green is actually closer to red, and the color they mapped to blue is closer to green, so it would have less cyan (blue and green) and more magenta (blue and red). It would probably look a little more purple-ish.

irrational(10000) 5 days ago [-]

It blows my mind to zoom in on just one part of the picture and see how many stars there are. And then multiply that by the entire picture. And then multiply that by all the galaxies in the universe. My mind just isn't built to comprehend numbers that large.

el_benhameen(2980) 5 days ago [-]

And some of those 'stars' are actually galaxies themselves! Truly mind blowing.

saiya-jin(10000) 5 days ago [-]

Yeah, if the universe doesn't make one humble, probably nothing will. After the recent imaging of M87 and its central black hole, I ended up on the page which lists the biggest black holes known to us [1]; it's pretty humbling to read the details on some (e.g. quasars outshining their whole host galaxy, so that we only see them). The universe is surprisingly diverse.

[1] https://en.wikipedia.org/wiki/List_of_most_massive_black_hol...

sandworm101(4017) 5 days ago [-]

Lol, nostalgia. Watching that image load slowly from top to bottom is so 1994 for me.

mixmastamyk(3409) 5 days ago [-]

Are you using zmodem? Believe it can resume a broken transmission. ;-)

veryworried(10000) 5 days ago [-]

It is not possible to ever see a scene like this even if one were sitting in deep space, is it? These sorts of images are the result of long exposures; a human would only see blackness and stars, and maybe some faint puffs of light here and there.

sandworm101(4017) 5 days ago [-]

You wouldn't ever see the colors. They are far too dim without magnification. If you were standing in the cloud you would probably see it a little, like we see our galaxy as a blurry cloud, but only on the darkest nights.

ardy42(3950) 5 days ago [-]

Can someone give a quick explanation of the objects in the picture? Are all the nebulas in the LMC or in the foreground? Is the LMC the reddish haze in the background?

autocorr(2720) 5 days ago [-]

The nebulae are regions where stars are forming in the LMC. In fact, the brightest and largest one, on the middle left, is the Tarantula Nebula, with the young forming star cluster 30 Doradus. 30 Dor is notable for being the biggest, baddest star-forming region in the Milky Way or its satellites: it hosts a few hundred stars more massive than sixty times the mass of the Sun in its core, and hosts the candidates for the highest-mass stars observed, above around 150 solar masses. If placed 100 times closer, where the Orion Nebula is, its illumination would cast visible shadows, taking up a quarter of the night sky with an average surface brightness of Venus.

dtjohnnyb(10000) 5 days ago [-]

It's not quite your question, but here's a cool site that put its location in space in context for me: http://www.atlasoftheuniverse.com/sattelit.html

subcosmos(3301) 5 days ago [-]

The LMC is effectively a galaxy that we captured and started tearing stars from. The white disk is the core.

kunalpowar1203(10000) 5 days ago [-]

While spending about 30 minutes zooming in and marvelling at it, i came across https://imgur.com/fdb6JZH. Is this an exploding star?

avidmoon(10000) 5 days ago [-]

This is DEM L316. It might seem like one object, but these are the remnants of two different supernovas (of different types: the smaller is Type Ia, the bigger Type II).

[0] http://chandra.harvard.edu/photo/2005/d316/

hguhghuff(3851) 5 days ago [-]

I'd love to print this on 15-foot-wide wallpaper for the kids' bedroom.

colechristensen(10000) 5 days ago [-]

It wouldn't be cheap but I definitely could do that, 17' wide strips of photo paper aligned very carefully.

rodolphoarruda(3990) 5 days ago [-]

Well, that's what I'm doing tomorrow, but it will be for my living room. It's a 135x242 cm piece of wall, which is not as square a ratio as the image, so I had to crop it from its upper-left corner down. Inkscape exported it to a 666 MB PNG (ugh), I don't know why. So let's see...

gravy(3824) 5 days ago [-]

Full image http://www.cielaustral.com/galerie/photo95.htm?fbclid=IwAR3G.... (Linked in article) Really incredible.

ehsankia(10000) 5 days ago [-]

I opened the full image (14400x14200), which took a good minute to load, and spent some time just looking at every single dot in that picture, of which there are a lot.

ddingus(4071) 5 days ago [-]

Sometimes I envy what appear to be more interesting parts of the universe.

That image evokes all those feelings.

Then I remember that interesting probably = bad, and we might not live to appreciate it all.

anonytrary(3971) 5 days ago [-]

For anyone with a shitty computer like me, zooming into that is currently crashing my browser ¯\_(ツ)_/¯

ArtWomb(780) 5 days ago [-]

Magnificent! Congrats to Ciel Austral. We live in a New Golden Age in astrophysics and astronomy. With M87 black hole, TESS exoplanet catalog, Charon flyby, LIGO-Virgo, etc. ;)

"In an eternally inflating universe, anything that can happen will happen; in fact, it will happen an infinite number of times. Thus, the question of what is possible becomes trivial—anything is possible [...] The fraction of universes with any particular property is therefore equal to infinity divided by infinity—a meaningless ratio." -Alan Guth

http://physics.princeton.edu/~cosmo/sciam/

psykus(10000) 5 days ago [-]

Are there areas of the world with no light pollution where you can see anything remotely like this with the naked eye? Any parts of the Milky Way?

newnewpdro(4071) 5 days ago [-]

No, the best you can do unaided is a general view of the Milky Way.

But if you're willing to accept some optical aid, like a reflector and an eyepiece, a large amateur 'light bucket' Dobsonian telescope can unveil deep-space objects to your eye.

I don't think it's possible to get anything like these photos though: the sensor is collecting light over a very long duration to present as a single image. The only way to get more light into your eye in real time is with more aperture, and obviously there are practical limits there.

gotocake(1996) 5 days ago [-]

You won't see anything like that with the naked eye, period. We can't build up a composite of all the photons our eyes gather over 1060 hours! There are places where you can get minimal light pollution and see amazing things though.

https://darksitefinder.com/map/

That site will help.

samirop(10000) 5 days ago [-]

Hi everyone! I'm glad to see so much interest in this image in this amazing community. I'm the CTO of Observatorio El Sauce, the observatory where this telescope is located. This is a fully robotic observatory that provides a service called 'telescope hosting', which basically means that people send us their telescopes and observe remotely from wherever they live. Why would they do that instead of observing from their backyards? Because in that part of Chile we have the best sky quality for astronomy in the world, in terms of the number of clear nights a year (320 clear nights a year), light pollution (class 1 on the Bortle scale), and in something called 'seeing' (on average below 1 arcsecond), which is a measure of the smallest detail the sky allows you to image (the smaller the better). Thus, from our observatory our clients get the best possible image they can get with their telescopes.

Also, a good friend of mine developed this digital scope so you can zoom in on the picture easily without going back to the '90s internet experience: https://scope.avocco.com/case/20/eWKcUiIXpuQU9V0z

I'd be happy to answer your questions :) Enjoy!

siavosh(804) 5 days ago [-]

Cool service! Can you say anything about how many telescopes you host, ballpark pricing, and any history of how the observatory got started (land, permits, etc.)?

tathougies(10000) 5 days ago [-]

The fractalness (is that a word?) of the universe never ceases to amaze.

hguhghuff(3851) 5 days ago [-]

Fractosity?

Fractal nature of?

kartickv(3642) 5 days ago [-]

What kinds of image processing are used for this? Do the images need to be aligned, or can telescopes be pointed precisely enough? Are the images combined using mean/median, or something more sophisticated than that? What settings are the original photos captured with?

my_first_acct(1932) 5 days ago [-]

According to the official website [1] (which is linked to from TFA), the images were processed using PixInsight, a program popular among amateur astronomers. This page: [2] explains how PixInsight performs image alignment; it turns out to be a pretty complex (and interesting) process. The same page also explains the process of merging images.

[1] http://www.cielaustral.com/galerie/photo95.htm?fbclid=IwAR3G...

[2] http://pixinsight.com/doc/tools/StarAlignment/StarAlignment....

lokimedes(10000) 5 days ago [-]

They use an image stacking process that realigns a set of images to form the single image. See for instance https://rogergroom.com/astronomy-deep-sky-stacking-software/

dylan604(4102) 5 days ago [-]

There are several software options on image processing for astrophotography. Do a search for 'astrophotography image stacking', and you'll get a list of software, tutorials, videos, etc. A couple of the popular ones are Deep Sky Stacker[0] or PixInsight[1] or even Photoshop. They offer different options/capabilities.

The main thing about the capture settings is to use RAW. Other settings (ISO, exposure time, etc.) depend on the camera being used. However, the goal is to do whatever you can to capture as much light as possible within each frame.

The software does image alignment rotate/scale/etc to do the stacking. You can stack images taken of the same object from different physical locations. Spend a weekend in the desert shooting an object, then spend another weekend the next month at the top of a mountain shooting the same object, and all of the images can be stacked.

The telescope alignment precision is important, but less so than it used to be for a couple of reasons. With gear available today, you can take 'portable' telescopes into the field, do a decent polar alignment and then allow the guide scope/software to correct for any imprecision of the main scope's alignment and even tracking issues from manufacturing issues with the mount's worm gear. A guide scope is a second smaller telescope (wider field of view) attached to the main scope with a camera attached to it. That camera is connected to a computer running the guide software, and will track a designated star. The guide software will talk to the telescope's motors, and can speed up/slow down the motors to keep the guide star to within a 1/4 pixel deviation.

Also, with digital cameras, images of shorter exposure times are taken and then stacked in software. There are multiple benefits to doing this. Consider exposing a single frame for 60 minutes, versus twelve 5-minute exposures, or thirty 2-minute exposures. If anything bad happens during one exposure (a plane or a satellite crosses your view, someone uses a laser pointer through your frame of view, a bug lands on your primary, etc.) it's not 'that big of a deal' to capture it again. Also, digital camera sensors tend to get noisy with longer exposures due to heat build-up around the sensor (a problem film cameras do not suffer).

[0] http://deepskystacker.free.fr/english/index.html [1] http://www.pixinsight.com/
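
A minimal sketch of the plain averaging step, assuming the frames are already aligned and drawn onto same-sized canvases (real stackers also register the frames and reject outlier pixels, as described above):

function meanStack(canvases) {
    // average already-aligned frames pixel-by-pixel; noise shrinks with
    // the square root of the number of frames
    var w = canvases[0].width, h = canvases[0].height;
    var acc = new Float64Array(w * h * 4);
    canvases.forEach(function (c) {
        var d = c.getContext('2d').getImageData(0, 0, w, h).data;
        for (var i = 0; i < d.length; i++) acc[i] += d[i];
    });
    var out = new Uint8ClampedArray(acc.length);
    for (var i = 0; i < acc.length; i++) out[i] = acc[i] / canvases.length;
    return new ImageData(out, w, h);
}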

fizixer(10000) 5 days ago [-]

With these hi-res images I'm always curious which of the stars actually belong to the galaxy and which ones are 'noise', i.e., stars that are from our own galaxy 'blurring' the view.

I think filtering out 'local' stars should be very doable given ML/CV progress.

Moru(3846) 5 days ago [-]

There is probably some easier way of doing this, but if your favourite tool is a hammer, everything starts looking like a nail, I'm sure :-)

NickNameNick(10000) 5 days ago [-]

With images taken 6 months apart, spanning Earth's orbit of the Sun, you might be able to detect some parallax motion of the nearest stars.
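
For scale, a minimal calculation: parallax in arcseconds is defined as 1/distance in parsecs over a 1 AU baseline, and images 6 months apart span 2 AU, so the apparent shift doubles:

function sixMonthShiftArcsec(distanceParsecs) {
    // parallax (arcsec) = 1 / distance (pc); a 6-month pair of images
    // spans a 2 AU baseline, doubling the apparent shift
    return 2 / distanceParsecs;
}
console.log(sixMonthShiftArcsec(1.3));   // Proxima Centauri: ~1.5 arcsec
console.log(sixMonthShiftArcsec(50000)); // the LMC itself: ~0.00004 arcsec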





Historical Discussions: Microsimulation of Traffic Flow (April 16, 2019: 743 points)

(743) Microsimulation of Traffic Flow

743 points 4 days ago by social_quotient in 3799th position

traffic-simulation.de | Estimated reading time – 2 minutes | comments | anchor


Traffic Flow and General
  • Density/lane: 30/km
  • Truck Perc: 10%
  • Timewarp: 8 times

Car-Following Behavior
  • Max Accel a: 0.3 m/s²
  • Max Speed v0: 108 km/h
  • Time Gap T: 1.4 s
  • Min Gap s0: 2 m
  • Comf Decel b: 3 m/s²

Lane-Changing Behavior
  • Politeness: 0.1 m/s²
  • LC Threshold: 0.4 m/s²
  • Right Bias Cars: 0.05 m/s²
  • Right Bias Trucks: 0.2 m/s²

  • Change the road geometry by dragging
  • Click onto the road to disturb traffic flow
  • Drag obstacles or construction vehicles to create new bottlenecks
  • Drag traffic lights to the road and click on them to toggle between red and green
  • Use the info button repeatedly for more info
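
The parameter set above (Max Accel a, Max Speed v0, Time Gap T, Min Gap s0, Comf Decel b) matches the Intelligent Driver Model (IDM). A minimal sketch of the IDM acceleration rule, assuming SI units; the example parameters mirror the defaults above:

// Intelligent Driver Model: acceleration of a car with speed v (m/s),
// following a leader with speed vLead at bumper-to-bumper gap s (m).
function idmAcceleration(v, vLead, s, p) {
    var dv = v - vLead; // approach rate: positive when closing in
    var sStar = p.s0 + Math.max(0, v * p.T + (v * dv) / (2 * Math.sqrt(p.a * p.b)));
    return p.a * (1 - Math.pow(v / p.v0, 4) - Math.pow(sStar / s, 2));
}

// Defaults above: a = 0.3 m/s^2, b = 3 m/s^2, v0 = 108 km/h = 30 m/s,
// T = 1.4 s, s0 = 2 m.
var p = { a: 0.3, b: 3, v0: 30, T: 1.4, s0: 2 };
console.log(idmAcceleration(25, 20, 40, p)); // closing fast on a slower car: brake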



All Comments: [-] | anchor

artificialidiot(10000) 4 days ago [-]

It is fun to watch cars changing lanes like maniacs as if it matters.

steadicat(3957) 4 days ago [-]

Lane change behavior looks completely unrealistic to me. I see cars switching lanes on deceleration into a traffic jam, which doesn't buy you anything (both lanes are stopped) and is extremely dangerous (you're cutting off cars approaching fast from behind and braking at the last minute). Never seen this happen in real life, except maybe at toll booths.

On the other hand, there is no lane switching when traffic on adjacent lanes clears up. In real life, drivers will quickly switch if a nearby lane starts moving. They can't see whether their own lane is also clearing up ahead, and most people won't take any chances.

fmduifaheuri(10000) 4 days ago [-]

Cool!

I made something simpler yet similar a while back (hit 'refresh' after the initial load, sorry for that bug):

http://sinelaw.github.io/jammed/src/

In my simulation each car has a 'personality', which affects lane passing, following distance/time, speed, acceleration, etc.

Did it as an exercise in JavaScript + HTML Canvas programming...

mysterydip(10000) 4 days ago [-]

I'm working on simple traffic simulation for a game that takes place in a city so the main link and yours came at a perfect time. Thanks!

User23(3312) 4 days ago [-]

I'd love to see an addition to this for motorcycle lane-splitting. I assume that's illegal in Germany though. Still though, if you want to keep your CO2 emissions down in exchange for a somewhat higher risk of death or dismemberment, motorcycles are a great choice. Oh and they're cool so there's that.

s_y_n_t_a_x(10000) 4 days ago [-]

It's not a somewhat higher risk of dying, it's 35 times higher.

dosy(2893) 4 days ago [-]

Would self-driving AI cars change these simulations? Seems they could coordinate to optimize for flow.

cpeterso(59) 4 days ago [-]

There are major safety concerns if your car is taking anonymous, untrusted input on how it should drive. People could do anything from telling other cars to move out of their way to maliciously causing accidents.

michaelt(3942) 4 days ago [-]

There are designs like platooning [1] which, in theory, would allow self-driving cars to drive bumper-to-bumper at high speeds, accelerating and braking together like the carriages of a train.

Of course, for that to work every car has to trust the data it gets from every other car, so there's no place for bugs or user-modifiable software in such a system. And the stopping distance is the worst of any car in the platoon.

[1] https://en.wikipedia.org/wiki/Platoon_(automobile)

dsfyu404ed(10000) 4 days ago [-]

Too bad you can't adjust the speed of the in/out flows. I was looking forward to creating evidence that people who suck at merging and take off-ramps too slowly screw it up for everyone else.

Being able to adjust the min/max and standard deviation of traffic speeds would be nice too.

AAmarkov(10000) 4 days ago [-]

This was my hypothesis as well.

You can set the maximum speed to its highest value. While this doesn't directly control on-ramp flows, the cars already on the highway will be moving much faster than cars on the on-ramp due to acceleration/distance constraints. Then a jam ensues.

takk309(10000) 4 days ago [-]

This is a neat toy simulation. I do enjoy the depth of the drivers' behavior. I am a traffic engineer and traffic simulation is a large portion of my job. Professionally I use PTV Vissim. [1] It allows me to build just about any road network configuration imaginable.

[1] http://vision-traffic.ptvgroup.com/en-us/products/ptv-vissim...

jordan801(10000) 4 days ago [-]

I've been looking for a decent traffic simulator. Can you modify driver behavior? I am really curious how much traffic could be reduced by changing driver behavior and by simple vehicle modifications.

For example, how much time could be saved if, when a red light changes to green, all cars accelerated at the same time and speed, instead of the accordion behavior? This is something basic driver AI could handle, and it might help with pollution and traffic while fully autonomous driving is still underway.

godelmachine(357) 4 days ago [-]

Went through their Git but couldn't deduce much.

Do they use Poisson distribution, by any chance?

Sohcahtoa82(10000) 4 days ago [-]

I'd certainly love to be able to distribute some poison to some of the drivers out there.

olliej(4118) 4 days ago [-]

I've seen a bunch of things like this that all seem to imply entry to a roundabout is governed by stop signs. Roundabout entries are meant to be governed by 'give way'/yield rules so that they don't impede traffic when there isn't significant load, so these models always end up showing characteristics similar to stop signs (not quite as bad, but that's not a high bar).

Personally, though, I've always wanted to make a genetic algorithm implementation that models this, but where cars can evolve when and what signals they give and how they respond to those signals. The fitness function would be speed to get to the destination; crashing would be a failure.

I think it would be interesting to see what happens.
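
A minimal skeleton of that GA idea; everything here (genome encoding, truncation selection, the 10% mutation rate) is an illustrative assumption, and simulateTrip is a hypothetical stub standing in for a run of the traffic simulator:

// stub: run the simulator with this signalling policy and return time
// to destination in seconds (Infinity on crash = automatic failure)
function simulateTrip(genes) { return Math.random() * 100; }

function evolve(popSize, genomeLen, generations) {
    var pop = [];
    for (var i = 0; i < popSize; i++) {
        pop.push({ genes: Array.from({ length: genomeLen }, Math.random) });
    }
    for (var g = 0; g < generations; g++) {
        pop.forEach(function (ind) { ind.fitness = -simulateTrip(ind.genes); });
        pop.sort(function (a, b) { return b.fitness - a.fitness; }); // fastest first
        var parents = pop.slice(0, popSize / 2); // truncation selection
        pop = parents.concat(parents.map(function (par) {
            return { genes: par.genes.map(function (x) {
                return Math.random() < 0.1 ? Math.random() : x; // 10% mutation
            }) };
        }));
    }
    return pop[0]; // best policy found
}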

datenwolf(4044) 4 days ago [-]

My personally favorite 'roundabout' is this one here:

https://www.google.com/maps/@48.4324714,13.0948723,637m

It was built about 20 years ago, and I was living in the area back then.

Before the 'roundabout' was built, when it was a traditional intersection, it used to be a major accident hotspot despite having a speed limit of just 50 km/h. Crossing the main road was a nightmare, I can tell you that.

With the new topology, which is a very elongated roundabout, the speed limit on the main axis could actually be raised to 80 km/h, yet accidents now rarely happen there and traffic flow is much improved.

mikhailfranco(4049) 4 days ago [-]

In many places, road engineers deliberately obscure visibility of the roundabout from the approaches. So even if it is legally a 'yield', in practice, stopping is enforced by unaccountable manipulation.

Yet another example of LCD governance imposed by the crazy minority - in this case, those who would not yield correctly (see Skin In The Game, NNTaleb). Always remember that 50% of drivers are below average intelligence.

jdashg(10000) 4 days ago [-]

This does a great job of demonstrating how increasing acceleration decreases traffic. 2 m/s² is about 4.5 mph/s, and helps a lot: cars in front get out of the way faster. Lazy acceleration, while more accurate to what I see on the road, causes gridlock and traffic waves.

Perhaps more surprisingly, /reducing/ the comf(ortable?) deceleration keeps things running more smoothly than higher values do.

jrootabega(10000) 4 days ago [-]

My gut tells me that you should just gravitate towards the area with fewer cars. If the road ahead is clear or clearer, higher acceleration is better for everyone. If the road ahead is saturated but it's clearer behind, lower acceleration is better. If it's saturated in front and behind, good luck.

canofbars(10000) 4 days ago [-]

Lowering the max speed limit also had a massive effect on dissolving the stop-and-go waves.

IneffablePigeon(10000) 4 days ago [-]

>/reducing/ comf(ortable?) deceleration keeps things running smoother than higher values

This is basically what the dynamic speed limits do on motorways in the UK - if there's congestion, the speed limit is reduced on the section of road leading up to the congestion. It reduces the number of cars flowing into the traffic jam temporarily, which helps to smooth out the overall flow.

sandworm101(4017) 4 days ago [-]

Lol. I really enjoy these online traffic simulators. Specifically, I like seeing how people manipulate them to suit their own ideas about 'proper' driving.

There is no easy answer. Aggressive driving = bad. Tailgating = bad. But see what happens when you crank up the acceleration and decrease distance between cars. Then do the opposite. Simple little simulations like this teach us that bad driving is actually good for traffic.

One very interesting lesson is that speed seems largely irrelevant to traffic flow. So long as people don't leave gaps, all following each other closely and accelerating/braking like racers, net flow is divorced from speed.

nexuist(10000) 4 days ago [-]

There is an easy answer. Put a computer in charge of it (well, okay, we're all mostly software engineers and we all know it's not that easy - but - you get the point).

Aggressive driving and tailgating are bad because humans are not capable of making good choices at those distances/speeds. If you could 10x our reaction speed, those actions would not be as looked down upon (going 6-8 over the speed limit is technically illegal in most states, but nobody on the road is going to think you're some kind of maniac).

Simple little simulations like this teach us that simple little simulations do not account for reality and the complex nature of the meatbags in control of each vehicle. A more realistic simulator might roll the dice on whether a driver notices an obstacle in time, and have them crash straight into it, putting more debris and obstacles on the road for every other driver to avoid or crash into [1]. The gains in net flow from having everyone drive like madmen would probably be immediately wiped out by the hour(s) spent getting emergency vehicles onto the scene, redirecting traffic, treating the driver, towing the vehicle, etc. Not to mention that there would almost certainly be multiple scenes down the road, as there usually already are during rush hour, and we already condone this type of behavior!

[1] https://en.wikipedia.org/wiki/Rubbernecking

licyeus(10000) 4 days ago [-]

I noticed the same thing re: high acceleration, but that + decreased distance between cars in real world would lead to increased collisions, I'd assume.

Simulators like this (and seeing ideal simulation behavior deviate from correct real-world behavior) concern me re: the coming mix of AV and human drivers.

tokyodude(10000) 4 days ago [-]

It's a neat simulation and great for seeing traffic ripples. But I'm finding that putting a cone in a single lane blocks cars forever. That would never happen; the car at the cone would force its way into an unblocked lane.

executesorder66(3521) 4 days ago [-]

It depends which settings you use. If you turn politeness all the way down, then the people in the blocked lane will force themselves into an unblocked lane.

herghost(3696) 4 days ago [-]

I was curious, so I tested the 'on ramp' strategy that seems to be in place on UK motorways at rush hour: the ramps are traffic-light controlled, but the lights always seem to be green for ~3 cars, then red for ~6 cars.

It always seemed really pointless.

Turns out that was the only way I could get traffic going again on the curved road with the on-ramp (even though I could only control the main traffic, not the on-ramp itself). Even dropping the speed limit had no effect on clearing the congestion at the junction.

Sohcahtoa82(10000) 4 days ago [-]

In all the traffic light controlled on-ramps I've seen in the USA, the lights specifically only allow 1 car per green (and they have signs indicating such).

They make a lot of sense. Traffic entering an on-ramp will usually come in bursts from traffic light. A burst of cars will have a much harder time merging with traffic than a trickle. So rather than creating, for example, a group of 20 cars every 60 seconds trying to merge at once, the on-ramp traffic light throttles that to 1 car every 3 seconds. Total throughput is the same, but the effect on existing highway traffic is significantly reduced.

takk309(10000) 4 days ago [-]

In the US we call this ramp metering. It is usually a dynamic system that monitors the upstream traffic density and only allows as many vehicles as can be added to reach a maximum density. We usually allow only 1 car per green signal. There have been some attempts to use them on roundabouts too.

mk89(10000) 4 days ago [-]

Roundabouts where the entering arms have priority lead to deadlocks; I never thought about this.

Interesting to see this in practice, having always taken the rule 'the ring has priority' for granted.

Franciscouzo(10000) 4 days ago [-]

There's a big roundabout in my city that's under construction, so some lanes are closed. When there are no traffic cops, people entering it cut you off, creating deadlocks; I've been stuck in that roundabout for more than 15 minutes at times.

ghostly_s(10000) 4 days ago [-]

Aren't you describing the difference between roundabouts and traffic circles? Maybe this is US-specific terminology.

thestepafter(4094) 4 days ago [-]

First, this is awesome. I have always been fascinated by traffic flow and how it works. This assuages my curiosity in so many ways.

Second, removing all of the politeness has an interesting effect. I think I can learn some lessons from this, thank you!

Third, they should use this in driver education classes.

KallDrexx(10000) 4 days ago [-]

Having literally just come back from two weeks in India, I can say that turning the politeness to zero perfectly describes the traffic flows (and lack thereof) I saw there.

atoav(10000) 4 days ago [-]

If this fascinates you, check out Sam Bur's YouTube channel: a city planner playing Cities: Skylines with a focus on traffic stuff.

siavosh(804) 4 days ago [-]

This is incredible, I put a cone in the middle of the road and realized the nightmare I had caused, and then quickly removed it. But the nightmare just continued to ripple through space and time. This explains all those mysterious traffic jams I've been in that seemed to defy reason.

systemtest(3797) 4 days ago [-]

The solution is simple. Lower the speed limit to 20kph, wait for the ripple to disappear and slowly raise the limit to 100kph. The reason this doesn't work in real life is lack of dynamic signage and because people don't adhere to the speed limit.

hliyan(1827) 4 days ago [-]

On the circular road, did you try clicking a single vehicle to slow it down just a tad? I did that (just one vehicle) and watched in horror as a phantom jam developed. Just 2 slow vehicles, and there arose a bumper-to-bumper wave of stationary cars.

zamalek(10000) 4 days ago [-]

Next, increase following distance to improve the situation.

kozhevnikov(4111) 4 days ago [-]

Here's a real-world example of a traffic wave due to one person's sudden braking:

https://i.imgur.com/fLNs3k0.gifv

yreg(10000) 4 days ago [-]

CGP Grey did a video on this: https://www.youtube.com/watch?v=iHzzSao6ypE. Your goal, as a driver, should be to try to maintain an equal distance between your car and the cars in front of you and behind you.

habosa(3505) 4 days ago [-]

Anyone know a good traffic simulator for cities that includes pedestrian behavior?

I always wonder if, at busy pedestrian intersections, it would make more sense to have the pedestrian flow stages and the vehicle flow stages be totally separated. Cars waiting to turn right for ~30s as pedestrians walk is kinda crazy.

zukzuk(10000) 4 days ago [-]

Cities in Motion and its sequel were pretty good and pretty serious transit simulators (https://en.wikipedia.org/wiki/Cities_in_Motion_2)

0xffff2(10000) 4 days ago [-]

Not a simulation, but I have observed that some of the intersections around UCLA are set up like this, so it makes sense to at least one traffic engineer in the world.

globuous(3743) 4 days ago [-]

I'm currently studying traffic using reinforcement learning with:

- Microscopic simulator: https://sumo.dlr.de/index.html

- Flow (abstraction of RLlib and Sumo): https://flow-project.github.io/

Have a look at SUMO: it's free, open source, and comprehensive. There's also Aimsun [1], but I've never used it and I don't think it's free, although I'm unsure about that.

[1] https://www.aimsun.com/aimsun-next/

bobthepanda(10000) 4 days ago [-]

The problem with scrambles is that people hate waiting and get tempted to jaywalk because the cycle times get really high. Hence why they tend only to be used at places with more than 4 ways intersecting.

New York uses Leading Pedestrian Intervals, which give pedestrians the green before the cars can turn right; by the time a driver starts turning pedestrians are actively crossing the street and no longer in the blind spot. Another good way to increase pedestrian safety is just to ban right-on-red altogether.

amelius(867) 4 days ago [-]

My takeaway from playing with the controls: once you have a traffic jam, no amount of politeness is making it go away.

noxToken(4113) 4 days ago [-]

Politeness restricts the issue to one lane without making it worse for everyone else. You and all of your lane mates suffer, but everyone else gets to their destination at a normal time.

jaden(10000) 4 days ago [-]

After even a slight slowdown, I was unable to ever restore the vehicles to steady traffic again; I tried adding lanes and quick stop lights to ease the flow. It was maddening (like real traffic, I guess...).

teakdust(10000) 4 days ago [-]

I think cars should "tug" on each other (communicating with each other over RF somehow) and also act as a cushion (braking).

When an intersection light turns green, all cars should begin rolling together. Waiting for the cars in front of you in succession is burning time.

VierScar(3950) 4 days ago [-]

Waiting for cars in front of you is for safety and would be the case even with fully automated nationally adopted self driving electric cars.

When you're stopped you don't need a buffer so much for when things go wrong, but at higher speeds you need a bigger buffer, and having space helps avoid mass catastrophes when some freak storm knocks a tree branch onto the road or a car malfunctions or mechanical failure/wear and tear.

yason(3188) 4 days ago [-]

When an intersection light turns green - all cars should begin rolling together

Will not work. The safe distance between cars is negligible at traffic lights but much larger in moving traffic. At least two seconds is considered reasonable, although in reality much smaller gaps are common.

The 'rolling together' would only work if the queue at the traffic light copied the spacing needed by moving cars.

That is, the cars would have to be spaced 10-40 meters apart while waiting for green. Only then could all cars floor it and accelerate together without having to wait for the first ones to pull away.

adrianmonk(10000) 4 days ago [-]

I've pondered a similar question: is pulling up right behind the car in front of you the optimal behavior for a red light? What if you could get everyone to leave a large gap (on the order of several car lengths)? Could you increase the number of cars that make it through a light while it's green? Or would it not matter?

Human nature is to pull as far forward as you can, within certain limits at least. But is that actually optimal? If not, with self-driving cars or some other technological assistance, maybe we could increase the efficiency of intersections that use signals.

executesorder66(3521) 4 days ago [-]

> When an intersection light turns green - all cars should begin rolling together.

Agreed. I've always suspected this. Just play with the acceleration settings in the simulation for proof.

I hate it when the light goes green and people wait for a gap before driving. Just go; you are barely going 20 km/h at this stage.

If everyone accelerated quickly until they were on the other side of the traffic light, you could get double the number of cars through. No need to break the speed limit.
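
A back-of-the-envelope sketch agrees, at least directionally; the numbers here (a 1 s per-driver reaction delay, 2 m/s² of acceleration up to 14 m/s, 7 m of queue space per car, 20 s of green) are illustrative assumptions, not measurements:

function timeToCover(dist, a, vmax) {
    // time to travel 'dist' metres from rest: accelerate at 'a' until
    // reaching 'vmax', then cruise
    var tAccel = vmax / a, dAccel = 0.5 * a * tAccel * tAccel;
    if (dist <= dAccel) return Math.sqrt(2 * dist / a);
    return tAccel + (dist - dAccel) / vmax;
}
function carsThrough(reactionDelay, green) {
    var a = 2, vmax = 14, spacing = 7, n = 0;
    // car n starts moving at n * reactionDelay and must cover
    // (n + 1) * spacing metres to clear the stop line
    while (n * reactionDelay + timeToCover((n + 1) * spacing, a, vmax) <= green) n++;
    return n;
}
console.log(carsThrough(1, 20)); // accordion start: 11 cars
console.log(carsThrough(0, 20)); // everyone rolls together: 33 cars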

mariefred(10000) 4 days ago [-]

Scania, and probably others, are working on platooning trucks. From what I know it is not trivial: for example, most of the communication methods we have have round-trip delays that are too long and are not 100% reliable, so a backup is needed, maybe in the form of an autonomous braking system.

https://www.scania.com/group/en/platooning-automated-driving...

https://platooningensemble.eu/news/using-its-g5-for-efficien...





Historical Discussions: Animating URLs with JavaScript and Emojis (April 17, 2019: 723 points)
Animating URLs with JavaScript and Emojis (January 17, 2019: 6 points)
Animating URLs with JavaScript and Emojis (February 14, 2019: 4 points)
Animating URLs with JavaScript and Emojis (February 15, 2019: 3 points)
Animating URLs with JavaScript and Emojis (January 21, 2019: 3 points)

(740) Animating URLs with JavaScript and Emojis

740 points 3 days ago by bemmu in 182nd position

matthewrayfield.com | Estimated reading time – 8 minutes | comments | anchor

Animating URLs with Javascript and Emojis

You can use emoji (and other graphical unicode characters) in URLs. And wow is it great. But no one seems to do it. Why? Perhaps emoji are too exotic for normie web platforms to handle? Or maybe they are avoided for fear of angering the SEO gods?

Whatever the reason, the overlapping portion on the Venn diagram of 'It's Possible vs. No One Is Doing It' is where my excitement usually lies. So I decided to put a little time into the possibilities of graphical characters in URLs. Specifically, into the possibility of animating these characters by way of some Javascript.

Loopin'

First off, make sure your page's Javascript code is being labelled as UTF-8 or you're gonna have a bad time putting emoji in your code at all. This can be accomplished via an HTTP header, or page META tag. There's a good chance you don't have to worry about this. But you can find more info about this here: Unicode in Javascript by Flavio
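
For reference, a minimal example of each option. The page META tag, placed in the head before any script:

<meta charset="utf-8">

Or, for an externally served script file, the equivalent HTTP response header:

Content-Type: application/javascript; charset=utf-8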

To achieve our desired outcome of emoji dancing like sugar plum fairies in our address bar, we need a loop. And really, all we need is a loop. We start the loop, it loops, and we're happy. So here's our first loop, a spinning emoji moon. I think when they added this sequence of emoji they must have had this in mind right?

var f = ['🌑', '🌒', '🌓', '🌔', '🌕', '🌖', '🌗', '🌘'];
    function loop() {
        location.hash = f[Math.floor((Date.now()/100)%f.length)];
        setTimeout(loop, 50);
    }
    loop();

Run Moon Code:

You can click the toggle checkbox above to see the result of this loop in your URL bar.

If you don't like the spinning moons you can swap out that array with whatever emojis you want. Like a clock:

var f = ['🕐','🕑','🕒','🕓','🕔','🕕','🕖','🕗','🕘','🕙','🕚','🕛'];

Run Clock Code:

This is a real simple example. Too simple really. So let's upgrade our loop so that it generates a string of multiple emoji! This time we're utilizing the emoji 'skin tone modifiers' characters to make some color-changing babies:

var e = ['🏻', '🏼', '🏽', '🏾', '🏿'];
    function loop() {
        var s = '',
            i, m;
        for (i = 0; i < 10; i ++) {
            m = Math.floor(e.length * ((Math.sin((Date.now()/100) + i)+1)/2));
            s += '👶' + e[m];
        }
        location.hash = s;
        setTimeout(loop, 50);
    }
    loop();

Run Babies Code:

We use a sine wave controlled by time and position to select which color we want. This gives us a nice loopy color changing effect!

Or how about we revisit our moon spinner, spread it out, and make something resembling a loading indicator? Sure, let's do it:

var f = ['🌑', '🌘', '🌗', '🌖', '🌕', '🌔', '🌓', '🌒'],
        d = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        m = 0;
    function loop() {
        var s = '', x = 0;
        if (!m) {
            while (d[x] == 4) {
                x ++;
            }
            if (x >= d.length) m = 1;
            else {
                d[x] ++;
            }
        }
        else {
            while (d[x] == 0) {
                x ++;
            }
            if (x >= d.length) m = 0;
            else {
                d[x] ++;
                if (d[x] == 8) d[x] = 0;
            }
        }
        d.forEach(function (n) {
            s += f[n];
        });
        location.hash = s;
        setTimeout(loop, 50);
    }
    loop();

Run Multi-Moon Code:

Exploring Other Characters

But it's not just emoji that give us a means to pump graphics out of our URL bar. There's a whole boatload of unicode characters of interest to our goals.

Particularly interesting are the Box-drawing Characters:

Many of these lend themselves better to a two dimensional output. But they're still pretty good on the single line we have to play with. For instance we can make a string of multiple height varied block characters and construct a nice little wave:

function loop() {
        var i, n, s = '';
        for (i = 0; i < 10; i++) {
            n = Math.floor(Math.sin((Date.now()/200) + (i/2)) * 4) + 4;
            s += String.fromCharCode(0x2581 + n);
        }
        window.location.hash = s;
        setTimeout(loop, 50);
    }
    loop();

Run Wavy Code:

I liked this look so much I put it up permanently at wavyurl.com.

Using the variable width characters we can even wiggle on the horizontal, creating something like a progress bar:

function loop() {
        var s = '',
            p;
        p = Math.floor(((Math.sin(Date.now()/300)+1)/2) * 100);
        while (p >= 8) {
            s += '█';
            p -= 8;
        }
        s += ['⠀','▏','▎','▍','▌','▋','▊','▉'][p];
        location.hash = s;
        setTimeout(loop, 50);
    }
    loop();

Run Progress Bar Code:

A progress bar huh? That's like, almost useful. Which brings me to...

Displaying Video Progress In The URL Bar

In an attempt to reduce the frivolity in our little experiment, I came up with the idea to show a web video's progress in the URL. I simply attach a function that renders our progress string to the 'timeupdate' event for a video, and voila! A video progress indicator in the URL, complete with the time and duration!

var video;
    function formatTime(seconds) {
        var minutes = Math.floor(seconds/60),
            seconds = Math.floor(seconds - (minutes*60));
        return ('0'+minutes).substr(-2) + ':' + ('0'+seconds).substr(-2);
    }
    function renderProgressBar() {
        var s = '',
            l = 15,
            p = Math.floor(video.currentTime / video.duration * (l-1)),
            i;
        for (i = 0; i < l; i ++) {
            if (i == p) s +='◯';
            else if (i < p) s += '─';
            else s += '┄';
        }
        location.hash = '╭'+s+'╮'+formatTime(video.currentTime)+'╱'+formatTime(video.duration);
    }
    video = document.getElementById('video');
    video.addEventListener('timeupdate', renderProgressBar);

Run Video Progress Bar Code:

With the above checkbox checked, you can use the video below to try it out.

I rather like this lines and circle progress bar, but if you fancy some moon emoji, I've got you covered:

var e = ['🌑', '🌘', '🌗', '🌖', '🌕'],
        video;
    function formatTime(seconds) {
        var minutes = Math.floor(seconds/60),
            seconds = Math.floor(seconds - (minutes*60));
        return ('0'+minutes).substr(-2) + ':' + ('0'+seconds).substr(-2);
    }
    function renderProgressBar() {
        var s = '',
            c = 0,
            l = 10,
            p = Math.floor(video.currentTime / video.duration * ((l*5)-1)),
            i;
        while (p >= 5) {
            s += e[4];
            c ++;
            p -= 5;
        }
        s += e[p];
        c ++;
        while (c < l) {
            s += e[0];
            c ++;
        }
        location.hash = s+formatTime(video.currentTime)+'╱'+formatTime(video.duration);
    }
    video = document.getElementById('video');
    video.addEventListener('timeupdate', renderProgressBar);

Run Video Moons Progress Bar Code:

Okay, calling this progress bar 'useful' is a stretch. But if I squint, I can almost see a scenario where it would be useful to have this in a video sharing URL. Like YouTube has the option of creating a link to a video at a specific time. Might it not be cool to include a visual indication? Hmmm?

Maybe there is some more useful implementation of this 'technology' that I haven't come up with. I'll keep thinking on that. And hey, maybe you can come up with something?

One Last Thing

You may be wondering why I used 'location.hash =' instead of the newer and shinier HTML5 History API. Two reasons. One solvable. The other less so. Both inconvenient.

Issue 1 is also a feature of the History API: it actually changes the whole URL path, not just the hash. So if I use the History API and change our page to '/🌑🌘🌗🌖🌕', it'll look nicer than having a # tacked on. But it also means my web server must be able to respond to '/🌑🌘🌗🌖🌕', or the user will be out of luck if they refresh or otherwise navigate to the modified URL. This is doable, but trickier than using 'location.hash =' which doesn't require me to prepare the server in any special way.

Issue 2 is more unexpected. Turns out that in 2 out of 3 browsers I tested, the History API is throttled. If I push my wavy URL characters to the address bar at a fast rate I'll eventually get the following error in Chrome:

Throttling history state changes to prevent the browser from hanging.

Safari is nice enough to give us a bit more info:

SecurityError: Attempt to use history.pushState() more than 100 times per 30.000000 seconds

Now if I stay under that limit I'm fine. But c'mon, 3 frames a second just doesn't cut it for the ooey gooey URL animations I desire.

Good boy Firefox, on the other hand, doesn't seem to give a hoot how many times I push a new history entry or how quickly. Which is gosh darn thoughtful of it. But breaking in two major browsers, plus necessitating special web server configuration to fix Issue 1, makes me willing to put up with a little # in the URL.

The End ?

I'll leave it there. But I will tell ya that I've got a few ideas for making tiny games that display in the URL bar. Especially given the Braille Characters that we have yet to explore. So stay tuned for that.

If you have questions, comments, or simply want to keep up with my latest tinkerings, check me out on Twitter: @MatthewRayfield. Or subscribe to my almost-never-bothered email list here.

Oh and if you want the source for these URL mutilating abominations wrapped up in nice little ready-to-run HTML files, here you go ;]

Bye for now!




All Comments: [-] | anchor

throwaway287391(10000) 3 days ago [-]

I realize everyone hates videos on HN and probably scrolled right past it, but this guy is hysterical. Worth watching the first couple minutes at least.

odyssey7(10000) 3 days ago [-]

You got me to go back and watch it. It was great.

sdan(3619) 3 days ago [-]

Company checks employee's search history: 'So what's so interesting about matthewrayfield.com that you had to go there 59,000 times in the last hour?'

saagarjha(10000) 3 days ago [-]

I'm sure when they see the website for themselves they'd agree with you ;)

peteforde(1199) 3 days ago [-]

Updating the hash of a URL doesn't (or shouldn't) make a request to the server, so your abstract employee hero is safe.

kkarakk(10000) 3 days ago [-]

Just checking to see if you trust me boss ;)

lwansbrough(4085) 3 days ago [-]

Those idiots over at Google AMP are working on destroying the web when they could be working on something productive like this.

revskill(3884) 3 days ago [-]

It's sad/funny that some big companies adopt AMP in their products even faster than Google themselves.

dymk(10000) 3 days ago [-]

Is this satire, or...?

lifthrasiir(2633) 3 days ago [-]

This page will wreck your browser history. The post explains the rationale at the end, but frankly speaking there should have been a warning next to each checkbox.

Diti(4108) 3 days ago [-]

I'm on mobile. The checkboxes had no effect; the code ran on its own by default. And I gave up on trying to go back to Lobsters by using the Back button.

ballenf(4002) 3 days ago [-]

Your comment made me realize that there are people who curate their browser history (other than just removing problematic stuff). Not criticizing, just never considered that was something anyone cared about.

hyeomans(10000) 3 days ago [-]

I think it also crashed LittleSnitch

craftycode(10000) 3 days ago [-]

It shouldn't. It uses history.replaceState(). https://developer.mozilla.org/en-US/docs/Web/API/History_API...

core1024(10000) 3 days ago [-]

If you're using Firefox you can Control+H then right click on the site and select the option labeled 'Forget About This Site'. Done.

tomc1985(10000) 3 days ago [-]

> You can use emoji (and other graphical unicode characters) in URLs. And wow is it great. But no one seems to do it. Why? Perhaps emoji are too exotic for normie web platforms to handle? Or maybe they are avoided for fear of angering the SEO gods?

No, we don't because it's stupid. Thank god e-mail spam doesn't seem to do this very much with subject lines. What is so wrong with putting TEXT where text belongs?

Welcome to Web 3.0, home of shitty memes and stupid emoji gimmicks

sydd(10000) 3 days ago [-]

It's a joke.

droptablemain(4057) 3 days ago [-]

'Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.'

ralusek(4107) 3 days ago [-]

That was literally the first thing they did! They stopped, thought about whether or not they should, and then did! And now we have animated url emojis and an island filled with dinosaurs and what could possibly go wro

bork1(10000) 3 days ago [-]

One downside is that it basically renders the back button useless. I ran the examples you provided (which were great fun!), and when I tried to get back to Hacker News I realized each step in the animation had been added to my browser history.

zuppy(10000) 3 days ago [-]

you can use replaceState for that and back will work fine

codedokode(4109) 3 days ago [-]

Maybe browsers should go back not to previous URL, but to previous domain.

yyx(10000) 3 days ago [-]

My firefox got OOM killed thanks to this.

nialv7(4101) 3 days ago [-]

You are probably using Simple Tab Group

kkarakk(10000) 3 days ago [-]

wavyurl.com works amazingly well on chrome mobile but firefox mobile just puts the unicode characters there. BOO!

carc1n0gen(10000) 3 days ago [-]

Working perfectly fine on my firefox

peteforde(1199) 3 days ago [-]

I love the creativity of the hack, especially when he sync'd the video position with the scrubber. Respect.

I also love the guy's vaguely deranged, Adult Swim-inspired commentary. I went from hater to fan in about 30 seconds, once I realized that he must idolize Tim Heidecker. Again, respect.

However, what I love the most is how the video was composited and edited. What tools do you use to pull that together without slaving over every detail? It looks deceptively low-budget but there's a lot of really subtle details in the way he zooms in on the URL, tweens and flips his video around that make the whole thing feel like a Tim and Eric special feature.

I'd love to know how to produce that sort of result on a meaningfully short timeline.

aasasd(10000) 3 days ago [-]

Youtube poops and endless music video memes are some of the best things to come out of the meme culture. Because folk culture is now truly 'multimedia' and we have a horde of people for whom hyperactive video and sound editing is second nature. Now just to wait for when it becomes mainstream and commoditized like graphics editing.

I'm hoping for about the same fate for knowledge visualization techniques—might see it happen soon enough now that 'data science' is in vogue.

MatthewRayfield(4088) 3 days ago [-]

Heyyyy thanks glad you enjoyed it! I was surprised to wake up and see this on Hacker News today.

Thought I'd comment on the video editing:

It is all pretty low tech. I use Adobe Premiere to edit and mostly just animate transform, scale, and crop effects to achieve everything. Like for the zooms you pointed out I double up a layer of the screencap and then crop and scale. It's a bit painstaking at times.

Another little part is when I do the screen recording I set my desktop background to an image with a colored box area of 1280x720. This is the area I know will be 'on screen'. So I have the windows I want to pull in during the video just outside this area. Then I crop to this box in editing. I think this is better looking than just capturing the whole desktop, and I like the live feel.

I'm enjoying evolving this style... I sometimes have thoughts of making some kind of gross homemade video compositing tool in JS. But I haven't gotten there yet...

jjar(3614) 3 days ago [-]

Note: you must have <meta charset='utf-8' /> in <head> for this to work.

recursive(10000) 3 days ago [-]

Or you can just use \u escape codes and keep all your source in ASCII.
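For example, a two-line sketch; the escape is the UTF-16 surrogate pair for U+1F311:

    // the new-moon emoji written with \u escapes, keeping the source ASCII
    var newMoon = '\uD83C\uDF11'; // U+1F311
    location.hash = newMoon;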

z3t4(3839) 3 days ago [-]

Am I the only one thinking about buying many domain names to do the animation on the actual TLD ?

Traubenfuchs(10000) 3 days ago [-]

Would that not reload the websites? Then it would only make sense if you have super low latency, tiny payload and don't need to scroll.

deno(3863) 3 days ago [-]

You can't do this on the domain name, or even subdomain. Changing origin requires actual navigation, and would probably trigger loop detection.

Ayesh(10000) 2 days ago [-]

Probably yes. In browser history updates, browsers will trigger a regular page navigation if the domain changes.

I suppose with keep-alive, all sub/top domains in the same TLS certificate, and immutable far-future caches, one could pull off an animation at ~5 fps.

peteforde(1199) 3 days ago [-]

... damn, dude.

hombre_fatal(10000) 3 days ago [-]

Am I going crazy or haven't I used websites that update the favicon to indicate things like new notifications?

Here's a 16x16 pixel game of Defender implemented by animating the favicon: http://www.p01.org/defender_of_the_favicon/
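The usual trick is to redraw the icon on a small canvas. A minimal sketch of the technique, not taken from either linked project (note that Safari has historically ignored dynamic favicon updates):

    // draw a red notification dot and swap it in as the favicon
    var link = document.querySelector('link[rel="icon"]');
    if (!link) {
        link = document.createElement('link');
        link.rel = 'icon';
        document.head.appendChild(link);
    }
    var canvas = document.createElement('canvas');
    canvas.width = canvas.height = 16;
    var ctx = canvas.getContext('2d');
    ctx.fillStyle = 'red';
    ctx.beginPath();
    ctx.arc(8, 8, 8, 0, 2 * Math.PI);
    ctx.fill();
    link.href = canvas.toDataURL('image/png');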

bubblicious(3277) 3 days ago [-]

Shameless plug for a tiny project I did a while back that does exactly this: http://nicolasbize.com/faviconx/

dgellow(625) 3 days ago [-]

Github does this for pull request status, it's really great

nwsm(3831) 3 days ago [-]

There are a few ccTLDs that allow emojis in domains. [0] So you could buy all the ones you need for an animation and set up a redirect loop.

[0]: https://en.wikipedia.org/wiki/Emoji_domain

Technetium_Hat(10000) 3 days ago [-]

Unfortunately they still appear as punycode in most contexts.

cjblomqvist(4074) 3 days ago [-]

Cool! It did however completely break my back button....

mrighele(10000) 3 days ago [-]

Because it manipulated the hash component directly. By using the 'replaceState' method of the history API [1] you could avoid polluting the browser's history.

[1] https://developer.mozilla.org/en-US/docs/Web/API/History_API

serpix(10000) 3 days ago [-]

Better trash the past hour of history since every animation is a separate history entry :D

lazyjones(4036) 3 days ago [-]

Somewhat fixable (this is for moon.html):

            var n = 0;
            function loop() {
                // pick the current frame from the wall clock (as moon.html does)
                location.hash = f[Math.floor((Date.now()/100)%f.length)];
                // after roughly one full cycle, rewind history by one
                // cycle's worth of entries to undo the frames pushed above
                if (n++ > f.length) {
                    n = 0;
                    window.history.go(-f.length);
                }
                setTimeout(loop, 50);
            }
This actually doesn't work properly and jumps back too far sometimes, because the frame comes from the wall clock while the rewind is driven by the counter n, but you get the idea.
torarnv(10000) 3 days ago [-]

For some reason it also broke Safari on iOS to the point where I couldn't open HN via a bookmark either. Had to close the tab completely.

oevi(4111) 3 days ago [-]

This got me thinking about some standardized control elements websites could put in the address / menu bar.

Something like media controls, progress bars, or even consent banners. This way they would have a unified appearance and could be user-scriptable.

reaperducer(3922) 3 days ago [-]

Unless someone browses full screen or bookmarks the app to their home screen on mobile so there is no visible url bar to animate.

TheKarateKid(10000) 3 days ago [-]

I'm just waiting for mainstream media sites to abuse this to put: 20% OFF SUBSCRIBE NOW in the URL of their page.

Edit: HN doesn't support emoji in comments. That was supposed to be surrounded by sirens.

veryworried(10000) 3 days ago [-]

If you ever see emoji in hackernews comments then hackernews, as it has been, is over.

benguild(2609) 3 days ago [-]

This reminds me of putting marquees in the window.status like 20 years ago

Tade0(10000) 3 days ago [-]

The spirit is there definitely.

My take is that it's only a matter of time before emojis creep into every area where text is used.

Let's hope they won't ever become a mainstream part of programming languages.

umvi(10000) 3 days ago [-]

Use incognito mode for this site... your back button will thank me

tracker1(4099) 3 days ago [-]

Or, just open in a new tab.

sureaboutthis(10000) 3 days ago [-]

Another perfect example of the childish activities of the web today. Remember the days when postings were about serious technical advancements and not random hobbyist activities?

MeltySmelty(10000) 3 days ago [-]

lol you are a fucking bitch

tracker1(4099) 3 days ago [-]

I remember when it was about animated 'under construction' gifs on millions of pages across the likes of geocities, and everyone self-publishing... until the social networks came... MySpace allowed for customized css which had a lot of user activity... then facebook normalized it all.

While I do appreciate the clean aesthetics most of the time... I really do like to see the creative individuals gets some things in.

Additional ideas mentioned include animated favicons and titles... I'd throw in that an audio waveform with music would be cool too.

toastking(10000) 3 days ago [-]

I mean if you mean ARPAnet then yeah. The web was always a place for creative people to meet and talk about the eccentric stuff they were into.

pault(4050) 3 days ago [-]

What? Stuff like this is what made the web fun. At least he put it on his own blog instead of facebook.

penagwin(4099) 3 days ago [-]

> Remember the days when postings were about serious technical advancements and not random hobbyist activities?

Not really no? Certainly not in the last 20 years at least.

OskarS(4111) 3 days ago [-]

This is very cute and all but... you know... don't actually do this! Please. Even if you make it so that it doesn't break the back button or history, URLs are things that people copy and paste and send to people. They are put into Word-documents, bookmarks and archive.org. It's not a place for dynamic animations.

URLs should be static, simple, and as short as is reasonable.

Kiro(3689) 3 days ago [-]

That's not for you to decide.

dosy(2893) 3 days ago [-]

Would you approve animating the title?

askmike(4021) 3 days ago [-]

While in the video he's updating the location hash (which breaks the back button), one can implement this using history.replaceState[1] to keep the back button working normally. He also mentions parsing the animated URL to figure out how far the video was loaded, so if you want, you can use the animation for navigation. Or alternatively, just put the animation behind a separator and have your router ignore the animation part.

This would preserve both the back button as well as bookmarking! Note that this video is more of a hack exploring what fun things we can do with the URL bar; he stated a number of times that it's more cool than actually useful.

[1]: https://developer.mozilla.org/en-US/docs/Web/API/History_API...
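A minimal sketch of that combination, reusing the article's renderer with only the last line changed (the throttling caveat from the article may still apply to replaceState):

    var video = document.getElementById('video');
    video.addEventListener('timeupdate', function () {
        var s = '', l = 15,
            p = Math.floor(video.currentTime / video.duration * (l - 1));
        for (var i = 0; i < l; i++) {
            s += (i === p) ? '◯' : (i < p ? '─' : '┄');
        }
        // replaceState swaps the current history entry instead of pushing
        // a new one, so Back still leaves the page in a single step
        history.replaceState(null, '', '#' + s);
    });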





Historical Discussions: How Apple, Google, and other tech companies conspired against their own workers (April 15, 2019: 717 points)

(717) How Apple, Google, and other tech companies conspired against their own workers

717 points 5 days ago by snowisgone in 3442nd position

www.whenrulesdontapply.com | | comments | anchor

Changing jobs is a common path to seek better pay and professional advancement. But this time-honored free market practice was secretly inhibited by top Silicon Valley tech executives who colluded through 'no-poach' agreements that suppressed the wages and opportunities of their employees.

The U.S. Department of Justice and the California Attorney General's office undertook legal action against Apple, Adobe, Intel, eBay, Pixar, Intuit, and Lucasfilm in a unique application of antitrust law in defense of labor rights.

A separate class action lawsuit was filed on behalf of 64,000 tech workers for an estimated $3 billion in lost wages. Before the case went to trial, the companies settled for a total of $435 million.




All Comments: [-] | anchor

wyldfire(626) 5 days ago [-]

> A separate class action lawsuit was filed on behalf of 64,000 tech workers for an estimated $3 billion in lost wages. Before the case went to trial, the companies settled for a total of $435 million.

How would we know if they resumed the bad behavior?

rak00n(4070) 5 days ago [-]

That's only $6796.87 per person.

cliffy(3827) 5 days ago [-]

The settlement is laughable given the number of people whose salaries were held down as a result of this collusion. It's not just workers at these companies that suffer. It has a network effect of holding salaries down at companies which aren't even colluding.

Another example of (usually giant) companies deciding to risk breaking the law because it's cheaper in the long run to do so.

iheartpotatoes(10000) 4 days ago [-]

EXACTLY. I remember us engineers getting 0% raises for years in the mid 2000's when I worked at the Evil-i. Managers had to have meetings to explain to us it was only temporary due to the bubble bursting (options from '97 were underwater until practically 2011). And what, they throw a few $k at us and that's supposed to make up for it? Glad I got out of that toxic shithole. My peers who left in the 90's tried to encourage me to leave, but I thought I'd be stable and loyal. I remember around 1990-91 when former employees who had been 'wrongfully terminated' were protesting, and I was a just some mid-20something laughing at them for being slackers. Joke's on me, they were dead right.

taurath(4118) 5 days ago [-]

Let's call settlements what they are: bribes. Sometimes a bribe can bring justice, but in cases like this, where they've created a systemic problem, the knock-on effects are too large for a bribe to ever be true justice. That's why people need to go to jail.

skybrian(1769) 5 days ago [-]

It was a long time ago. I got a raise out of it and later retired early, and I'm a pretty average software engineer, for Google anyway. Your argument that I 'suffered' when actually they treated me very well by almost any standard isn't going to work.

Based on what I've heard about current salaries for in-demand people, I think it's safe to say that any depressing effect on software engineers' salaries is long gone?

And if not, and this is what holding down salaries looks like, imagine what the housing market would be like without it.

gok(679) 5 days ago [-]

> Top executives of leading tech companies secretly agreed among themselves not to hire each other's employees

That's not accurate. The agreement was about recruiters cold-calling each other's employees. Employees were free to reach out on their own between companies. Workers routinely switched companies while the agreement was in effect.

I still think it was bullshit, but it wasn't quite as egregious as the article makes it sound.

Disclaimer: I was one of the affected employees, and got a share of the settlement.

elicash(4119) 5 days ago [-]

Recruitment is a vital part of the work companies do and they traditionally pour a ton of resources into that aspect of hiring. But yes, I agree specifying the deal was 'limited' to recruitment hiring would be better and more precise.

That shouldn't be used to diminish how vast of a deal this was. But I generally agree with you.

morganw(10000) 4 days ago [-]

A lot of the discovery was never redacted & released; it remains under seal. I think there was more going on like a courtesy 'we're going to hire your person. You want us to turn them down & earn a chit for the future?'.

arebop(10000) 5 days ago [-]

Your statement is also not accurate, because there were several variants of 'the agreement' and for some pairs of companies the agreement extended beyond cold-calling [https://www.lieffcabraser.com/antitrust/high-tech-employees/]. I'm sure multiple careers were set back when their prospective employer's HR department notified their current employer's HR department about the disloyalty.

drdeadringer(10000) 5 days ago [-]

I don't know how this is still a super secret unknown to the public at large. This has been reported for years. What am I missing?

Spoom(10000) 5 days ago [-]

> The project was made possible by a grant from the eBay Settlement Fund.

arebop(10000) 5 days ago [-]

These employees aren't very sympathetic victims. Nobody wants to hear about how some 5%-er with a bunch of good alternatives should be making more and having more choices about who to work for.

You'll even find in the comments here that many of the affected employees themselves feel they got at worst a fair deal, because after all some (most!) workers get paid less and work in harsher conditions and so forth.

turc1656(10000) 5 days ago [-]

Notice the difference: 'Top executives of leading tech companies secretly agreed among themselves not to hire each other's employees'

'...brought charges against the companies'

They should have been indicting both, or just the executives for their explicit role in this criminal behavior. I really detest the lack of individual accountability in all these cases when we deal with large companies. If this was a couple of small, local competitors who were the only two [whatever] in the area and they did this, I guarantee you the owners would be individually indicted.

jhanschoo(10000) 4 days ago [-]

Depending on their position and the law, it is possible that liability is limited so that they cannot be sued by external entities. That is, external entities can sue the companies, but only the company and shareholders and possibly the state can sue the execs.

dominotw(1936) 5 days ago [-]

> Top executives of leading tech companies secretly agreed among themselves not to hire each other's employees

How do they enforce this agreement?

Reminds me of the Phoebus cartel agreeing to produce lightbulbs below a certain lifetime. https://en.wikipedia.org/wiki/Phoebus_cartel

sneak(2955) 4 days ago [-]

Google's shareholders and board kept Eric Schmidt in a position of power for approximately a decade after his fraud against the staff was revealed.

macspoofing(10000) 5 days ago [-]

Meh. It really wasn't that big of a deal. They got caught and fined and that feels like an appropriate level of punishment. There are times when a prison-term against bad-actor executives is the right punishment ... this isn't it.

>If this was a couple of small, local competitors who were the only two [whatever] in the area and they did this, I guarantee you the owners would be individually indicted.

No. 100% NO. I guarantee you that the regulators wouldn't care. And if they did care, the 'local competitors' would get AT BEST a slap on the wrist like a strongly worded letter with a time-frame or MAYBE a fine (if that). There would be no indictments.

I work with regulators all the time (FDA, Health Canada), and they will work with you to get you into compliance. Consider that all companies are under a large amount of legal and regulatory constraints, and chances are every company is doing something against some law or regulation (knowingly or unknowingly). As long as what you did didn't result in undue hardship (though I'm sure you'll try to claim that this was an egregious action - I disagree) and you fix your behaviour, you'll be fine.

munk-a(4015) 5 days ago [-]

I agree, there is far too much ability to transfer liability in the modern world which lets this sort of stuff get abstract and easier to dismiss. Companies should be held accountable to the extent their growth has encouraged bad behavior, but individuals at that company committed the behavior and, if they were instructed to do so and blindly followed then...

1. 'Just following orders' isn't now and has never been a valid excuse

2. Go after the coercers or modify our understanding of liability to allow those targeted to make the argument that their superiors deserve to repay them every bit of penalty they owed.

Really, in the modern world, just sue someone and tell them they're on the hook but they'll be allowed to sue whoever is responsible for their portion of the responsibility and watch this all work itself out quickly.

justfor1comment(10000) 5 days ago [-]

Being a rich executive is essentially a get out of jail free card in America. Only good outcome for us commoners is that they can't screw us in the same way again. They have to invent a new way next time.

fulafel(3389) 4 days ago [-]

Was there a criminal case to be made?

samatman(3674) 5 days ago [-]

I would agree with both.

But bringing charges against a company is important. Boards of directors are supposed to, well, direct the executives, not look the other way while they break the law and abuse their employees.

The way to encourage this is to hit the company, hard, right in the wallet.

JDulin(3779) 5 days ago [-]

This is an important point, and I couldn't agree more.

This idea overwhelms me when I read the newest filings and indictments of pharmaceutical companies in the opioid crisis. A corporation, fundamentally, cannot 'learn lessons' like people do. It is a collection of incentives, with men inside directed or manipulated by those incentives. If gently nudging 47,000 people a year to kill themselves with overdoses would create more revenue for the corporation, after lawsuit settlements, than not, they would likely do it all over again. Even if you change the men making the decisions - 'Finding more moral men' is not a plan.

Executives must have skin in the game, because the possible upside to their career at the highest levels of American business are too great to hope they'll take a moral stand. The potential upside for Richard Sackler, and John Kapoor, and Steve Jobs is so high (Massive bonuses, stock prices), and the potential downside so low (They are fired, with a generous golden parachute), they are willing to take the chance they'll get away with it. The most likely outcome is that attorneys will get rich, and nothing much else.

The only solution to this type of white collar crime and leadership malfeasance is to make executives, the individual human beings, feel a tinge of reptilian fear in their gut that they may go to prison and the livelihood of their families could be put in danger.

i_am_proteus(10000) 5 days ago [-]

Part of the reason modern corporate bureaucracies are so bloated and complicated is to reduce the personal risk to executives by muddying the apparent decision-making process at the top. Executives can make it look like they're people who execute complicated processes rather than decision-makers.

They end up taking credit when things go well, and they might look bad when things go poorly, but they essentially never get held criminally liable. Nice work if you can get it.

scottlegrand2(10000) 4 days ago [-]

I think the latest expression of this concept came out when a friend of mine was up for a senior role at Google and they ended consideration when they suddenly decided that he switched jobs too often to be a leader, but they were happy to continue considering him for a role beneath what he currently already had.

Nevermark(10000) 3 days ago [-]

Isn't that a reasonable concern?

The decision is specific to that one candidate and related directly to the candidates work history.

Is there no N, where N is the number of jobs someone has had in the last five years, where you would consider someone to be a questionable candidate for a key position?

Animats(1972) 5 days ago [-]

It should be possible to crunch through LinkedIn histories and determine where there's collusion between employers. If moves between employer A and employer B are lower than between A to C and C to B, something funny is going on.
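A back-of-the-envelope sketch of that test (the data shape and names here are hypothetical, and the reply below points out real confounders):

    // moves: [{from: 'A', to: 'B'}, ...] -- one record per job change.
    // Flags employer pairs whose direct flow is far below what each
    // company's overall churn would predict under independence.
    function collusionScores(moves) {
        var out = {}, into = {}, pair = {};
        moves.forEach(function (m) {
            out[m.from] = (out[m.from] || 0) + 1;
            into[m.to] = (into[m.to] || 0) + 1;
            var k = m.from + '->' + m.to;
            pair[k] = (pair[k] || 0) + 1;
        });
        var scores = {};
        Object.keys(pair).forEach(function (k) {
            var ft = k.split('->');
            var expected = out[ft[0]] * into[ft[1]] / moves.length;
            scores[k] = pair[k] / expected; // << 1 hints at suppressed flow
        });
        return scores;
    }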

stcredzero(3109) 5 days ago [-]

With something as complex as employment decisions, there could be a lot of confounding factors. Example: What if A and B are located somewhere more desirable? There may naturally be a desire to stay where you are and less impetus to move. A and B might be in a class of company which confers more prestige, so there would be less impetus to move. A and B might entail more risk somehow. There are many of these.

Using bulk statistics to show things like bias and collusion is fraught with these issues.

DINKDINK(3893) 5 days ago [-]

I wonder how much small firms/startups have /benefited/ from this arrangement. By artificially reducing demand for labor, big tech firms have effectively made it more profitable for smaller firms to compete against them.

rightbyte(10000) 5 days ago [-]

I think you got it the other way around. They have decreased the price of labour and thus increased the demand for it.

This is more or less an employer union.

ummonk(4069) 5 days ago [-]

That, along with regulatory enforcement, was basically what broke the collusion scheme. Facebook as a growing company did not want to play ball, and aggressively poached from the other tech companies with bigger offers, forcing them to raise their compensation to retain employees.

tbarbugli(3599) 5 days ago [-]

> estimated $3 billion in lost wages. Before the case went to trial, the companies settled for a total of $435 million

So you get busted doing something illegal and walk away with $2.5 billion in your pocket and zero jail time. How do you expect execs not to do this all the time?

skybrian(1769) 4 days ago [-]

We are talking about a hypothetical wage calculation with many unknowns versus a cash settlement. In truth, nobody knows what would have happened in the counterfactual case, but settlements require picking a number.

khalilravanna(10000) 5 days ago [-]

If the number of 64,000 employees is correct, even that $3B seems like a pittance. One commenter wrote their wage jumped by $100k the year after this settlement. $3B / 64k ~= $50k per person, less than half that number[1]. In the end they only had to pay an amount that was < $7k per person. What a joke.

My guess is the executives who orchestrated this got a big bonus as a result. And why wouldn't they? They optimized for the corporation's good. How do we collectively start incentivizing not being evil like this?

1. Sure it's only one data point to extrapolate off of but anecdotally hearing that people were paid $100-150k in SF while this was going on is a substantial amount of wage suppression given that those same people employed now would be making $300-500k at those same companies without blinking an eye.

keepper(4019) 5 days ago [-]

So while this is bad... I don't know how to feel about this. The other side of the coin is that these companies pay WAY ABOVE average salaries. I'm intimately familiar with this, having...

FAANG companies average around 50% more compensation than the industry norm, especially outside of the Bay Area.

Typical TC range for a Sr. SDE in NYC: 150-250k + bonus. Typical TC range for a Sr. SDE in NYC at Google/Facebook/Amazon: 300-400k (and it goes higher).

Only the most well-paid quants working in horrible environments made 500k+. There's a good portion of >E5 engineers at these companies making well over that.

Anti-competition is bad... but they are at the top of the industry on compensation. :( So again, I don't know how to feel about this.

kjar(10000) 5 days ago [-]

This argument is very weak IMO. Paraphrased "I got paid happily, industry collusion to suppress wages is therefore OK"

defen(4084) 5 days ago [-]

> So while this is bad... I don't know how to feel about this.

Just to be clear: you are saying it's possibly ok for a cartel of billionaires to collude to suppress wages for a bunch of people making six figures? Because those employees have it 'pretty good' and should be happy with what they're given, even though they are working to build world-historical fortunes and power for the business owners?

samirillian(10000) 5 days ago [-]

Yeah, until you compare how much they spend on wages to their market cap, which is just insanely low relative to most other industries.

deogeo(3975) 5 days ago [-]

Now compare the wages against CEO and shareholder compensation.

dropit_sphere(10000) 5 days ago [-]

'Remember kids, profit is for capital, never labor. Know the difference!'

cobookman(3535) 5 days ago [-]

There is a lot of ageism in this industry. I'd be curious if SWE at FAANG is equivalent to football players in the NFL.

Aka, Sure you might make more but what's the earnings over total career?

fullshark(3749) 5 days ago [-]

Remember this happened before FB / Netflix / Unicorns started an open war for talent and blew salaries up.

ummonk(4069) 5 days ago [-]

You've got the timeline backwards. The compensation only went up after this collusion ended.

izzydata(10000) 5 days ago [-]

Who gets to judge how much an engineer is worth to a company making billion upon billions of dollars a year? If some of these engineers are what is enabling these companies and their executives to make millions then maybe they really are worth a 1 million dollar salary.

benologist(1015) 5 days ago [-]

These companies still routinely conspire against their own workers to slightly increase profits by reducing benefits, salaried roles, sick pay etc.

Nevermark(10000) 3 days ago [-]

And employees conspire to get more by gaining experience then moving to other companies, differentiating themselves and asking for raises?

You are describing negotiation.

Companies are not supposed to want to raise wages any more than employees are supposed to want to lower wages. The balance is how the market puts a value on different kinds of work and skills.

inlined(4017) 5 days ago [-]

One of the things that made my mind explode was that this was blatantly recommended in The Hard Thing About Hard Things. Horowitz argues that your relationship with your business partner is worth more than the employee and you shouldn't entertain solicitations for employment.

aswanson(3680) 5 days ago [-]

I mean, at least he kept it real.

munk-a(4015) 5 days ago [-]

This is why, as an employee, I am not willing to put my neck out for my employer - modern society and goings on are a pretty clear lesson that employees that take risks for a company end up eating them and nobody higher up cares (unless it's essentially free to care, which is more easily explained by self-interested behavior than any sort of altruism).

kjar(10000) 5 days ago [-]

Reminds me of the work "Disposable People"

Nevermark(10000) 5 days ago [-]

Balancing the risk/reward of poaching employees from a partner company makes sense.

This is not about backroom agreements to artificially keep wages down, but about weighing the pro's and con's any particular action which could derail important projects.

If the employee is worth more than the risks of poaching them, then do so. If not, it would be foolhardy to do so.

basetop(10000) 4 days ago [-]

Sadly, employees are just another resource ( human resource ) in the modern world.

thorwasdfasdf(10000) 5 days ago [-]

It's interesting the lengths they'll go to rip off their own workers, and yet they won't even make the slightest effort to hire in locations where labor is much cheaper. There's countless places in the US where they could've fairly paid 1/3 less. And significantly more savings outside the US.

chillacy(4119) 5 days ago [-]

A lot of companies have satellite offices in smaller cities with lower CoL. But they still want to be in urban centers with lots of people and next to strong CS colleges.

skybrian(1769) 5 days ago [-]

These companies have offices all over the world, including China and India.

nostrademons(1625) 5 days ago [-]

So I was at one of these companies when the scandal broke. I didn't get screwed quite so much as my coworkers, since I'd only been working there for a year or so. The settlement was a joke - I got about $1100, but my compensation increased by roughly $100K/year the year after the cartel broke, and kept rising. Can't say I'm terribly pleased about either the wage-fixing or the settlement, but...

My wife works in philanthropy, and one of her jobs is investing in homeless shelters. We were talking the other day about how the Bay Area's housing/homelessness crisis is a direct consequence of the collapse of the high-tech wage-fixing cartel. Before 2010, the wage distribution from one of these huge companies was that founders and VCs would make billions, ~1000 early employees would end up with millions, and the rest of the employees live comfortable upper-middle-class lifestyles. The ~1000 employees who could cash out pre-IPO stock options would bid up prices in Hillsborough/Atherton/PacHeights/Woodside to ~$5M, but the rest of the Bay Area would be priced at what an ordinary professional could afford. After the cartel broke, the compensation structure changed so we have ~100K engineers each making ~$300-400K/year. That's enough to buy all the available housing inventory in the region. So now house prices in Mountain View and Sunnyvale go from $800K -> $2.4M, and you must be a dual-tech-income family to afford a house.

I say this not to imply that the cartel was a good thing (cartels are bad, and I'd much rather the solution be greater wage equality for everyone and building more housing so everyone can stay in the area), but to highlight the problem of unintended consequences. I've seen many people ask 'Why don't companies hire remote workers at Silicon Valley wages?' and in the same breath say 'Because I would never move to the third-world hellhole that the Bay Area has become', not realizing that if they did hire remote workers, the same thing would happen in their communities. Inflation is the flip side of higher wages; when everybody gets paid more, everything costs more.

chillacy(4119) 5 days ago [-]

Just a nit: Inflation is the result of too much money chasing too few goods, so not everything increases in price. Electronics and mass consumer goods typically stay the same price. Only goods which have limited supply and no substitution get more expensive, like housing.

Otherwise everything else makes sense.

pathseeker(10000) 5 days ago [-]

>So now house prices in Mountain View and Sunnyvale go from $800K -> $2.4M,

This just proves that the housing crisis was the same before. A regular middle-class income cannot afford an $800k home. The problem is the same before and after: there isn't enough supply for people who want homes.

Until you look at sales numbers and supply, your analysis is meaningless. Increased wages don't make housing prices increase.

>not realizing that if they did hire remote workers, the same thing would happen in their communities.

Again, this is bullshit. Google could hire 10,000 remote employees at SV wages in somewhere like the Detroit area and it would have a negligible impact on housing prices because the supply is so large.

Remember, techies still make up a tiny percentage of the bay area population. 1% of the population being flush with cash should have absolutely zero impact on a healthy housing market.

heavyset_go(10000) 4 days ago [-]

> So now house prices in Mountain View and Sunnyvale go from $800K -> $2.4M, and you must be a dual-tech-income family to afford a house.

Weird that this is a problem in Vancouver, Toronto, etc. The larger trend that you're ignoring is that global capital finds real estate in these areas to be good investments compared to their domestic options for storing value.

Instead of competing locally or domestically for real estate, people in the US, Canada, etc are competing with the richest people on the planet for a slice of the pie.

JDiculous(4060) 4 days ago [-]

> if they did hire remote workers, the same thing would happen in their communities.

That's nonsense. There's a severe shortage of housing in Silicon Valley despite the fact that there's an abundance of housing in most of the country. Hiring remote workers means that you're hiring all around the country/world and people can move, not hiring in any one community. Also, hiring remotely means you can pay much less than Silicon Valley compensation because landlords aren't leeching an enormous bulk of it.

Silicon Valley could solve its housing problem tomorrow if it fixed its ultra-restrictive zoning laws, ended Prop 13, and built a ton of apartments. It won't do that because the people in power place the interest of wealthy landlords above that of the average people who live there.

jplayer01(10000) 5 days ago [-]

But this wouldn't have happened in the first place if the executives hadn't formed this cartel. These are just second-order consequences of their actions - what might have happened over the course of two decades happened within one or two. But this isn't the government's fault; this is a market imbalance caused directly by Google et al. acting in bad faith.

gcbw2(4112) 5 days ago [-]

You are wildly assuming that because a pie was split among 10 people instead of one, there is more money buying houses. That's plain wrong. There is still a single pie. Houses were bought just the same, but by one person instead of 10. I don't know how much this facilitates first-home purchases when there are 10 buyers instead of one, but I doubt it had much impact, since those folks were buying more than one house themselves anyway.

In the end, the only thing that prevents housing price bubbles is denser desirable residential areas.

There might be other factors, but one is definitely NOT money distribution schemes from the top 1% to the top 10%.

flukus(3935) 4 days ago [-]

> We were talking the other day about how the Bay Area's housing/homelessness crisis is a direct consequence of the collapse of the high-tech wage-fixing cartel.

Inflated housing costs are a worldwide phenomenon, you'll see a very similar story in Sydney, London, Tokyo and other places so I very much doubt the root cause is anything to do with the bay area. It might be worse there given the amount of money flowing in, but it's a much bigger problem.

ChuckMcM(654) 5 days ago [-]

Another (minor) nit, in my experience you have always needed to be a dual tech income to afford a house in the cities near the bay. My wife and I bought our first house in 1984 for $153,000 and it required us to both be working well paid tech jobs. You can also buy a house usually if you have a lucky equity break and can use that to pay big chunk down so the mortgage is affordable on a single income but as far as I can tell buying a house on a single engineer's income, that is near the center of activity, hasn't been true since the early 70's or so.

hopler(10000) 4 days ago [-]

The reason everyone at Apple and Google got a massive pay rise after 2010 is that Zuckerberg didn't join the cartel, and Facebook turned on a firehose of money in SV for employees of Apple and Google. Even if you never worked for Facebook, you owe Zuckerberg thanks for your raises.

deepakhj(10000) 4 days ago [-]

'In the early 1960s, CA's population was 15 million & we built 250K-300K homes/year.

Today, CA's population is 40 million & we build 80K homes/year.

So our population nearly tripled while housing production dropped by over 2/3.

And people wonder why housing is so expensive.'

pertymcpert(10000) 5 days ago [-]

It's just exposed and accelerated an underlying issue in housing policy. When demand goes up, supply is also supposed to go up to meet it in a healthy market, but because of artificial supply restrictions it couldn't, and this is what happened. The government is to blame.

zzzzzzzza(10000) 4 days ago [-]

how much do you pay for coca cola? If the price has gone up, it's probably not nearly as much as real estate has (probably more in line with what real inflation has been). And that's because real estate is a natural monopoly... luckily there's a solution, read Henry George: land value taxes.

mleonhard(3368) 3 days ago [-]

Here's an email I sent to my city councilperson on 2017-12-01. It contains information that refutes your claim. I received no response:

Dear Aaron Peskin, I live in Supervisorial District 3, at ... . You represent me and my neighbors in your position on the SF Board of Supervisors. Your work is very important. I have lived in SF for 6 years and have seen the housing market get more and more expensive while the city's businesses have grown. Economically, the city is prospering and adding many jobs which bring workers to the city every day. Unfortunately, the city has not added enough housing for these people. Here are the numbers:

- In 2011, SF jobs increased 2% (see [1] page 5) and housing increased 0.07% (see [6] page 4).

- In 2012, SF jobs increased 5% (see [2] page 5) and housing increased 0.4% (see [7] page 4).

- In 2013, SF jobs increased 5% (see [3] page 5) and housing increased 0.9% (see [8] page 4).

- In 2014, SF jobs increased 5% (see [4] page 5) and housing increased 1% (see [9] page 5).

- In 2015, SF jobs increased 5% (see [5] page 5) and housing increased 1% (see [10] page 5).

The 2016 reports should appear at [11] when they are available.

I took Econ 101 in college and learned about the Law of Supply and Demand. It's clear to me that SF's housing crisis is caused by demand growing much faster than supply. The city has enacted many policies and programs to boost the economy, creating more jobs. It has not done enough to make places for all of those workers to live. It's an imbalance of epic proportions.

Fixing this situation is the responsibility of the San Francisco Board of Supervisors. You are our elected member of the board. Please tell me what you are doing to increase the housing supply.

Sincerely, Michael

[1] http://sf-planning.org/sites/default/files/FileCenter/Docume...

[2] http://default.sfplanning.org/publications_reports/Commerce_...

[3] http://default.sfplanning.org/publications_reports/Commerce_...

[4] http://default.sfplanning.org/publications_reports/Commerce_...

[5] http://default.sfplanning.org/publications_reports/2015_Comm...

[6] http://www.sf-planning.org/ftp/files/publications_reports/20...

[7] http://sf-planning.org/sites/default/files/FileCenter/Docume...

[8] http://default.sfplanning.org/publications_reports/Housing_I...

[9] http://www.sf-planning.org/ftp/files/publications_reports/20...

[10] http://default.sfplanning.org/publications_reports/2015_Hous...

[11] http://sf-planning.org/citywide-policy-reports-and-publicati...

DontGiveTwoFlux(3961) 5 days ago [-]

Can any member of the class lawsuit here comment on the settlement? How did that work out for you?

throwawwway(10000) 5 days ago [-]

I was a member of the class. After the settlement went through, I remember receiving a few large booklets in the mail from a law firm explaining the settlement and offering the opportunity to opt out.

After what seemed like a really long time - at least a year - I received a check for about $5000, which I had to pay taxes on, so it ended up being around $3000 net. I had left the company by that time, but still owned a substantial amount of stock, so I wouldn't be surprised if I ended up losing more from the negative impact on the stock price than I got in the settlement.

The people who really came out ahead in this were the lawyers, who took something like 25% of the settlement for themselves.

dnr(4118) 5 days ago [-]

I got some stuff in the mail, and eventually two checks for the two phases. I think they were something like $1100 and $7000.

I was kind of annoyed because the initial settlement was apparently so embarrassing that the _judge_ rejected it, something like $330M on supposedly $3B in lost wages. So they went back and increased it to the $435M, still nowhere close to what it should have been. Of course at this point the lawyers' incentives and the class' incentives are not aligned at all, so that's how it ended.

shereadsthenews(10000) 5 days ago [-]

Not a member of that class, but one of the things that happened as a result of these revelations was that everyone at Google got a large raise (much larger than typical annual raises) and a substantial one-time cash bonus. During the announcement it appeared to be just Eric being Uncle Moneybags, but right after the all-hands we all found out the reason from the newspapers.

ChuckMcM(654) 5 days ago [-]

The critical thing here is that if pay was uncontrolled, then skilled employees could market themselves to the highest payer, and that would put more of the profits of the work of those employees into employee pay instead of into company margin (aka profit).

It is pretty clear when you look at the revenue per employee numbers at some of these 'digital product' companies (specifically Google and Facebook), that rank and file employees are not sharing equally with management in the returns.

That is not to say that pay is bad, or that wage theft is going on (extreme positions), employees in California at least are at will and can quit at any time to offer their services to another player. The regulatory requirement though is to identify and prevent collusion between those players in keeping salaries low.

nathanvanfleet(10000) 5 days ago [-]

> That is not to say that pay is bad, or that wage theft is going on (extreme positions), employees in California at least are at will and can quit at any time to offer their services to another player. The regulatory requirement though is to identify and prevent collusion between those players in keeping salaries low.

Wait, but isn't this situation actually making them getting hired at other players much more difficult due to the collusion?

pathseeker(10000) 5 days ago [-]

>equally with management in the returns.

Managers at Google don't make much more and in some cases don't make more than the SWEs they manage. Did you mean to say shareholders?





Historical Discussions: Why software projects take longer than you think – a statistical model (April 16, 2019: 676 points)

(683) Why software projects take longer than you think – a statistical model

683 points 4 days ago by mzl in 2550th position

erikbern.com | Estimated reading time – 13 minutes | comments | anchor

Why software projects take longer than you think – a statistical model

2019-04-15

Anyone who has built software for a while knows that estimating how long something is going to take is hard. It's hard to come up with an unbiased estimate of how long something will take when, fundamentally, the work itself is about solving something. One pet theory I've had for a really long time is that some of this is really just a statistical artifact.

I suspect devs are actually decent at estimating the *median* time to complete a task. Planning is hard because they suck at the *average*.

— Erik Bernhardsson (@fulhack) May 11, 2017

Let's say you estimate a project to take 1 week. Let's say there are three equally likely outcomes: either it takes 1/2 week, or 1 week, or 2 weeks. The median outcome is actually the same as the estimate: 1 week, but the mean (aka average, aka expected value) is 7/6 = 1.17 weeks. The estimate is actually calibrated (unbiased) for the median (which is 1), but not for the mean.

A reasonable model for the "blowup factor" would be something like a log-normal distribution. If the estimate is one week, then let's model the real outcome as a random variable distributed according to the log-normal distribution around one week. This has the property that the median of the distribution is exactly one week, but the mean is much larger:

If we take the logarithm of the blowup factor, we end up with a plain old normal distribution centered around 0. This assumes the median blowup factor is 1x, and as you hopefully remember, log(1)=0. However, different tasks may have different uncertainties around 0. We can model this by varying the σ parameter which corresponds to the standard deviation of the normal distribution:

Just to put some numbers on this: when log(actual / estimated) = 1 then the blowup factor is exp(1) = e = 2.72. It's equally likely that a project blows up by a factor of exp(2) = 7.4 as it is that it completes in exp(-2) = 0.14 i.e. completes in 14% of the estimated time. Intuitively the reason the mean is so large is that tasks that complete faster than estimated have no way to compensate for the tasks that take much longer than estimated. We're bounded by 0, but unbounded in the other direction.
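These numbers follow from standard log-normal facts: if log(actual/estimated) is normally distributed with mean 0 and standard deviation σ, the median blowup factor is exp(0) = 1, the mean is exp(σ^2/2), and the 99th percentile is exp(2.326·σ). For σ=1 that gives a mean of exp(0.5) = 1.65 and a 99th percentile of exp(2.326) = 10.24, which are exactly the per-task numbers in the tables below.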

Is this just a model? You bet! But I'll get to real data shortly and show that this in fact maps to reality reasonably well using some empirical data.

Software estimation

So far so good, but let's really try to understand what this means in terms of software estimation. Let's say we look at the roadmap and it consists of 20 different software projects, and we're trying to estimate how long it's going to take to complete all of them.

Here's where the mean becomes crucial. Means add, but medians do not. So if we want to get an idea of how long it will take to complete the sum of n projects, we need to look at the mean. Let's say we have three different projects in the pipeline with the exact same σ=1:

         Median   Mean     99%
Task A     1.00   1.65   10.24
Task B     1.00   1.65   10.24
Task C     1.00   1.65   10.24
SUM        3.98   4.95   18.85

Note that the means add up and 4.95 = 1.65*3, but the other columns don't.

Now, let's add up three projects with different sigmas:

               Median    Mean      99%
Task A (σ=0.5)   1.00    1.13     3.20
Task B (σ=1)     1.00    1.65    10.24
Task C (σ=2)     1.00    7.39   104.87
SUM              4.00   10.18   107.99

The means still add up, but are nowhere near the naïve 3-week estimate you might come up with. Note that the high-uncertainty project with σ=2 basically ends up dominating the mean time to completion. For the 99th percentile, it doesn't just dominate it, it basically absorbs all the other ones. We can do a bigger example:

               Median    Mean      99%
Task A (σ=0.5)   1.00    1.13     3.20
Task B (σ=0.5)   1.00    1.13     3.20
Task C (σ=0.5)   1.00    1.13     3.20
Task D (σ=1)     1.00    1.65    10.24
Task E (σ=1)     1.00    1.65    10.24
Task F (σ=1)     1.00    1.65    10.24
Task G (σ=2)     1.00    7.39   104.87
SUM              9.74   15.71   112.65

Again, one single misbehaving task basically ends up dominating the calculation, at least for the 99% case. Even for the mean, though, the one freak project ends up taking over roughly half the time spent on these tasks, despite all of these tasks having a similar median time to completion. To make it simple, I assumed that all tasks have the same estimated size, but different uncertainties. The same math applies if we vary the size as well.
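A quick simulation makes the same point. Here's a minimal sketch (not from the article's GitHub code) that reproduces the SUM row of the second table by sampling:

    // sample three log-normal tasks (σ = 0.5, 1, 2, each with median 1)
    // and estimate the median and mean of their sum
    function randn() { // standard normal via the Box-Muller transform
        var u = 1 - Math.random(), v = Math.random();
        return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
    }
    var sigmas = [0.5, 1, 2], trials = 200000, totals = [];
    for (var t = 0; t < trials; t++) {
        var sum = 0;
        for (var i = 0; i < sigmas.length; i++) {
            sum += Math.exp(sigmas[i] * randn()); // log-normal sample
        }
        totals.push(sum);
    }
    totals.sort(function (a, b) { return a - b; });
    var mean = totals.reduce(function (a, b) { return a + b; }, 0) / trials;
    console.log(totals[trials / 2].toFixed(2)); // median: ~4.0
    console.log(mean.toFixed(2));               // mean: ~10.2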

Funny thing is, I've had this gut feeling for a while. Adding up estimates rarely works when you end up with more than a few tasks. Instead, figure out which tasks have the highest uncertainty – those tasks are basically going to dominate the mean time to completion.

I have two methods for estimating project size: (a) break things down into subprojects, estimate them, add it up (b) gut feeling estimate based on how nervous i feel about unexpected risks So far (b) is vastly more accurate for any project more than a few weeks

— Erik Bernhardsson (@fulhack) March 8, 2019

A chart summarizes the mean and 99th percentile as a function of the uncertainty (σ):

There is math to this now! I've started appreciating this during project planning: I truly think that adding up task estimates is a really misleading picture of how long something will take, because you have these crazy skewed tasks that will end up taking over.

Where's the empirical data?

I filed this in my brain under "curious toy models" for a long time, occasionally thinking that it's a neat illustration of a real world phenomenon I've observed. But surfing around on the interwebs one day, I encountered an interesting dataset of project estimation and actual times. Fantastic!

Let's do a quick scatter plot of estimated vs actual time to completion:

The median "blowup factor" (actual time divided by estimated time) turns out to be exactly 1x for this dataset, whereas the mean blowup factor is 1.81x. Again, this confirms the hunch that developers estimate the median well, but the mean ends up being much higher.
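For concreteness, both statistics are easy to compute from (estimate, actual) pairs; a minimal sketch (the field names are illustrative, and the dataset itself is not reproduced here):

    // blowup factors (actual / estimated), sorted, then median and mean
    function blowupStats(pairs) {
        var f = pairs.map(function (p) { return p.actual / p.estimated; })
                     .sort(function (a, b) { return a - b; });
        var mean = f.reduce(function (a, b) { return a + b; }, 0) / f.length;
        return { median: f[Math.floor(f.length / 2)], mean: mean };
    }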

Let's look at the distribution of the blowup factor. We're going to look at the logarithm of it:

You can see that it's pretty well centered around 0, where the blowup factor is exp(0) = 1.

I'm going to get a bit fancy with statistics now – feel free to skip if it's not your cup of tea. What can we infer from this empirical distribution? You might expect that the logarithms of the blowup factor would distribute according to a normal distribution, but that's not quite true. Note that the σs are themselves random and vary for each project.

One convenient way to model the σs is that they are sampled from an inverse Gamma distribution. If we assume (like previously) that the log of the blowup factors are distributed according to a normal distribution, then the "global" distribution of the logs of blowup factors ends up being Student's t-distribution.

Let's fit a Student's t-distribution to the distribution above:

Decent fit, in my opinion! The parameters of the t-distribution also define the inverse Gamma distribution of the σ values:

Note that values like σ>4 are incredibly unlikely, but when they happen, they cause a mean blowup of several thousand times.

Why software tasks always take longer than you think

Assuming this dataset is representative of software development (questionable!), we can infer some more numbers. We have the parameters for the t-distribution, so we can compute the mean time it takes to complete a task, without knowing what the σ for that task is.

While the median blowup factor imputed from this fit is 1x (as before), the 99th percentile blowup factor is 32x, but if you go to the 99.99th percentile, it's a whopping 55 million! One (hand-wavy) interpretation is that some tasks end up being essentially impossible to do. In fact, these extreme edge cases have such an outsize impact on the mean that the mean blowup factor of any task ends up being infinite. This is pretty bad news for people trying to hit deadlines!
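A small simulation makes the heavy tail tangible. The parameters below are hypothetical stand-ins for the fitted ones, not the actual fit:

    import numpy as np
    from scipy.stats import t

    df, loc, scale = 3.0, 0.0, 0.6  # hypothetical fitted parameters

    rng = np.random.default_rng(2)
    blowup = np.exp(t.rvs(df, loc=loc, scale=scale, size=2_000_000,
                          random_state=rng))
    for q in (50, 99, 99.99):
        print(f"{q}th percentile blowup: {np.percentile(blowup, q):,.1f}x")

    # The tail is so heavy that the sample mean never stabilizes as you
    # add samples -- consistent with the true mean being infinite.
    print("sample mean (unstable):", blowup.mean())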

Summary

If my model is right (a big if) then here's what we can learn:

  • People estimate the median completion time well, but not the mean.
  • The mean turns out to be substantially worse than the median, due to the distribution being skewed (log-normally).
  • When you add up the estimates for n tasks, things get even worse.
  • Tasks with the most uncertainty (rather than the biggest size) can often dominate the mean time it takes to complete all tasks.
  • The mean time to complete a task we know nothing about is actually infinite.

Notes

  • This is obviously just based on one dataset I found online. Other datasets may give different results.
  • My model is of course also highly subjective, like any statistical model.
  • I would ❤️ to apply the model to a much larger data set to see how well it holds up.
  • I assumed all tasks are independent. In reality they might be correlated, which would make the analysis a lot more annoying but (I think) lead to ultimately similar conclusions.
  • The sum of log-normally distributed values is not another log-normally distributed value. This is a weakness of that distribution, since you could argue most tasks are really just sums of sub-tasks, and it would be nice if our distribution were stable like that.
  • I removed small tasks (estimated time less than or equal to 7 hours) from the histogram, since small tasks skew the analysis and there was an odd spike at exactly 7.
  • The code is on my Github, as usual.
  • There's some discussion on Hacker News and on Reddit.



All Comments: [-] | anchor

SiempreViernes(2870) 4 days ago [-]

Fits symmetric function to clearly asymmetric distribution

Author: Decent fit, in my opinion!

This bad fit makes me genuinely sad (;∩;)

ben509(10000) 4 days ago [-]

He could probably tweak a skew normal distribution to make it fit nicely, but it's pretty close to normal.

iraldir(3974) 4 days ago [-]

I believe this is the reason why scrum uses story points instead of time estimates. By putting uncertainty on the same level as effort, you give it more weight. And using a Fibonacci sequence rather than a continuous scale, with the rule that you should round up if unsure, tends to correct those defects.

js8(10000) 4 days ago [-]

Yeah, that's the same reason to use pseudoscience, it's a longer word with 'science' in it, which means - more science!

jon-wood(3653) 4 days ago [-]

I push back pretty hard on story points whenever I see them. They're not useful until several iterations in, once you have a baseline for how many points can be delivered in an iteration, and even then one developer's 1-point story is another's 5-point story.

When estimating I tend to use a coarse scale of hours, days, weeks, and (in extreme cases) months. All estimates then get turned into a range between the minimum and maximum the estimate could mean - for example hours becomes 1 hour to 8 hours, days becomes 1 day to 5 days, and so on.

Anything estimated as weeks or months probably needs scoping until it can be broken down into tasks that will take between an hour and a week.

I find this method works well because it's intuitive, comes out with a suitable amount of uncertainty relative to the size of the task, and finally results in estimates with units that are useful to project managers and the wider business in scheduling other work needed to get a feature into production.
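A minimal sketch of this mapping; the hours and days ranges come from the comment above, while the weeks and months bounds are assumed for illustration:

    # Turn a coarse estimate scale into an explicit min-max range.
    SCALE_RANGES = {
        "hours":  (1, 8),   # 1 hour to 1 working day (from the comment)
        "days":   (1, 5),   # 1 day to 1 working week (from the comment)
        "weeks":  (1, 4),   # assumed: probably needs further scoping
        "months": (1, 3),   # assumed: definitely needs further scoping
    }

    def to_range(scale: str) -> str:
        lo, hi = SCALE_RANGES[scale]
        return f"{lo}-{hi} {scale}"

    print(to_range("days"))  # "1-5 days"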

IshKebab(10000) 4 days ago [-]

Yeah in my experience everyone just mentally translates to time anyway so it is pointless. '11 points, that's about 2 days right?'

If you want to make the uncertainty clear it's better to allow ranges, like 2h-5d instead of trying to play stupid mind games.

Story points are like setting your watch 5 mins early so you are on time to things. It doesn't work.

jfoutz(3874) 4 days ago [-]

There is an uncharitable response to this thread, which is marked dead. I do think there is a kernel of truth in that response, even though it is nothing more than mocking.

Why Fibonacci? I think it's reasonable to say time estimates include an exponential error: an estimated 1-hour task is very different from an estimated 1-week task. I see that 1-hour, 1-day and 1-week estimates are progressively, and likely exponentially, worse.

Is it just the ease of doing the math? (A totally reasonable answer, in my humble opinion.) Or is there something specific about Fibonacci that's actually relevant? I think it's the former, not the latter, but if you have any evidence to the contrary, I'd love to hear it.

StreamBright(2659) 4 days ago [-]

It is very easy to answer. Different people have different efficiency; therefore, task completion does not depend solely on complexity but also on who is executing it.

You should think about it (even though there are no actual physical units you can use to make this work):

Time to deliver a task = complexity / developer efficiency

This is why it is hard to develop a method to estimate time for development tasks.

brianpgordon(3969) 4 days ago [-]

If regular time estimation has tricky statistical properties when combining multiple estimates, story point estimation is just hopeless. Especially on a Fibonacci scale. When estimating, nobody thinks that an 8-point story is the same as eight 1-point stories. Even the metric itself is nonlinear!

wellpast(3684) 4 days ago [-]

> Instead, figure out which tasks have the highest uncertainty – those tasks are basically going to dominate the mean time to completion.

From the technical side of things, uncertainty can mean a few things here:

(A) I've never done this kind of task (or I don't remember or didn't write down how long this task took in the past)

(B) I don't know how to leverage my historic experience (e.g., implementing an XYZWidget in React and implementing the same widget in Vue or Elm for some reason take different amounts of time)

Considering (A)... Rarely does a seasoned developer in the typical business situation encounter technical tasks that are fundamentally different from what has been encountered before. Even your bleeding-edge business idea using modern 'JS + GraphQL' is still going to be built from the same fundamental pieces as your 1999 CRUD app using SOAP, and the estimates are going to be the same.

If you disagree with this you are in the (B) camp or you haven't done the work to track your estimates over time and see how ridiculously accurate estimates can be for an experienced practitioner. Even 'soft tasks' like 'design the widget' are estimable/repeatable.

This whole you-can't-estimate-software-accurately position is entirely a position of inexperience. And of course all bets are off there. You are talking about estimating learning in this case, not doing. And the bets are especially off if you aren't modeling that these are two different activities: learning and doing.

kansface(4034) 4 days ago [-]

> This whole you-can't-estimate-software accuracy position is entirely a position of inexperience.

There are other sources of uncertainty. NASA is probably the only organization that fully specifies product requirements before building stuff - often, even the customer doesn't know. Some problem domains preclude you from knowing in advance! When building the thing, complete understanding of the existing code, frameworks, libraries, and infrastructure is not possible. All of these abstractions eventually leak and break down at scale. Every one of these parts is also continually changing! Even if you could solve all of those problems, you'd still have to deal with people problems - incentives never align, budgets change, and alliances shift.

I don't have a formal proof, but I'm strongly suspicious that the only actual way to know how long something takes to build is to actually build it, which of course, may be impossible.

caseymarquis(3998) 3 days ago [-]

If the task is to reverse engineer an undocumented binary communication protocol to replace a poorly implemented proprietary file server, building the UI isn't the portion with high levels of uncertainty. I think the advice was good.

afarrell(4036) 4 days ago [-]

> is entirely a position of inexperience

There are a lot of inexperienced software engineers and very little good guidance written for them. What is a new CS grad to do when asked for an estimate? How can a new grad learn to produce accurate estimates within 3 months?

hnzix(4027) 4 days ago [-]

My rule of thumb: take your estimate, double it, then add 20%. I'm not joking.

PeterisP(10000) 4 days ago [-]

An approach that gives very similar results to yours but is more scientific is to multiply the initial estimates by e (2.718). Or, if you're conservative, then by pi.

stronglikedan(10000) 4 days ago [-]

If that 20% isn't for project management, then add another 10-20% for that, based on experience from previous projects for the same stakeholders. My similar rule of thumb is to multiply my estimate by 2.5, before adding the project management percentage. Of course, if it's something we've done before, it's more along the lines of 1.5 to 2.

thatoneuser(10000) 4 days ago [-]

Eh. I've been successful adding 20% in. I feel like if you have to double first (meaning your end result is 220% of what you originally estimated) then you aren't learning from previous mistakes. Maybe 220% is appropriate for the first time you do the work or work with a certain team, though.

d--b(4031) 4 days ago [-]

Or you could just multiply by 2.4 ...

I used to do 2.5, I read somewhere 2.5 was fairly common

maxxxxx(3988) 4 days ago [-]

My number is 5. Maybe my estimates tend to be a little optimistic :-). But in general I can look at something for a few minutes, make a quick estimate (maybe ask somebody else), multiply by 5 and be pretty close to the eventual time it will take.

XorNot(10000) 4 days ago [-]

Triple it. Then triple that if, to do it, you have to go across team boundaries.

jspash(10000) 4 days ago [-]

A professor of mine at university taught us the same thing. It's one of the most valuable things I learned and it is uncannily accurate. Even after 30 years in industry I still fall back on this formula with great results.

Sadly, managers want to believe that you put more effort into estimating deadlines, so I'll just whip up a Gantt chart retroactively based on the 2/20 rule and they're happy.

leethargo(4014) 4 days ago [-]

I know a version of it that's a little different: you double it, but then add a fixed offset, say 1 week, rather than 20%.

philpem(4114) 4 days ago [-]

Your boss just called. They said to tell you to stop padding your estimates...

CraneWorm(1887) 4 days ago [-]

I multiply my estimates by the number of people involved.

JustSomeNobody(3879) 4 days ago [-]

I close to double mine, but it also depends on the tasks. Some tasks I just know I'll complete in 1X my estimate. I do have a rule that nothing is ever estimated at less than 2 hours; no matter how small the task. Invariably, any time I have broken that rule, someone has checked in code that won't build (or something similar) and I have to spend time dealing with that.

I notice some devs like to underestimate to make themselves look more productive. Most of those tend to spend a lot of time fixing bugs that QA pushes back to them. They sometimes spend even more time arguing that it's not their bug.

neilwilson(4115) 4 days ago [-]

I've worked on twice my gut feel for years. It's generally pretty near the mark.

Humans are optimistic by nature. Even somebody as pessimistic as me.

jsight(4015) 4 days ago [-]

Oh, the Scotty Factor approach to estimating?

fouc(3970) 4 days ago [-]

Multiplying by 3 seems to produce the scenario where you come in under deadline and look like a hero.

onion2k(2257) 4 days ago [-]

This is why I like 3 point estimation[1] - if you have optimistic, expected and pessimistic estimates for each task you can pull out which points are high risk. Using a single estimate can't give you that insight.

[1] https://en.wikipedia.org/wiki/Three-point_estimation

vbuwivbiu(10000) 4 days ago [-]

manager: 'thanks for the optimistic estimate!'

jon-wood(3653) 4 days ago [-]

I've had a lot of success using this method, but it does take having management that understand how software development works. A really good PM will look at an estimate like this as a range, with work complete somewhere between the lower and upper bounds. A bad one will take the optimistic estimate as gospel and build everything else around that.

jermaustin1(3333) 4 days ago [-]

In my experience software takes longer to build than original estimates because no one will get out of the way of the development team and let them work.

This is an extreme example, but one I now live in daily.

My current full-time-ish gig is working on a pretty enterprisy system for law enforcement. To this date there hasn't been a single feature request, or bug fix that took more than 16 hours of development time. And so I know that I can typically finish something within a few hours to a day of receiving the task. UNLESS my manager wants to discuss ad nauseam what he means when he says 'intersect an array'. Or get stuck in 2 day long code reviews where my manager makes me sit behind him while he goes over every single line of code that changed, then gets side tracked and starts checking emails, chat messages, text messages, calling other developers in to check on their statuses, and even watching youtube... while I'm stuck in his office waiting on my code review to be done so I can go back to my 5th day of trying to complete a task that would have taken only a couple of uninterrupted hours. /rant

And this is why I pay $120 a week for therapy.

stronglikedan(10000) 4 days ago [-]

I give estimates in hours. They're really by days for large projects and half days for smaller projects, but I express them in hours. I then provide a breakdown of the hours the team has been able to spend to date, as well as the hours that each forced interruption took. That way, when my manager asks why he perceives us as being behind on the project, I can tell him exactly how long his pointless meetings and changing priorities have delayed the team. It's also a good CYA technique for when he tries to blame the team when his manager asks why the project is delayed.

dboreham(3656) 4 days ago [-]

I find it useful to point out in situations like this that we have already spent more time discussing the issue, or often 'how not to do the thing', than it would take to simply do it.

pysxul(10000) 4 days ago [-]

Sorry but why would a manager even do a code review?

JabavuAdams(1737) 4 days ago [-]

My empirically-confirmed heuristic is that the time to deliver a feature set that someone would actually want to use is 2.5x-3x of the time I think of when asked for an off-the-cuff estimate.

Basically, multiply the initial estimate by a number between e and pi -- no joke! It's a bit of a problem given that PMs think they're being generous with a 20% pad.

rainhacker(3278) 4 days ago [-]

I can relate to your heuristic. The current project I've been working on is close to completion and somewhere between 2.5-3x of its initial estimate.

teddyh(2414) 4 days ago [-]

According to Joel Spolsky [1], programmers are generally bad at estimating, but they are consistently bad, with the exact factor depending on the individual. So by measuring each person's estimate and comparing it to the actual time taken after the fact, you can determine each person's estimation factor, and then when they estimate again, you can get a pretty reliable figure.

[1] https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...
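A rough sketch of that idea as I read it, with made-up numbers: sample a blowup ratio from the developer's own history for each new task, then read confidence levels off the simulated totals:

    import random

    # Hypothetical history for one developer: actual / estimated, per past task.
    past_ratios = [1.0, 0.8, 1.5, 2.5, 1.1]
    new_estimates = [4, 2, 8, 16]  # hours for upcoming tasks

    random.seed(0)
    totals = sorted(
        sum(e * random.choice(past_ratios) for e in new_estimates)
        for _ in range(10_000)
    )
    print("50% confidence:", totals[len(totals) // 2], "hours")
    print("90% confidence:", totals[int(len(totals) * 0.9)], "hours")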

rightbyte(10000) 4 days ago [-]

'As estimators gain more experience, their estimating skills improve. So throw away any velocities older than, say, six months.'

I don't know how Spolsky is (was) as a boss, but it is way easier to just adjust your time reports to match the estimates and make your boss happy than to actually try, for no benefit to either you or the project. As long as the estimation errors are white-noise-ish and in linear relation to actual task length, which is the assumption for his model to work, manipulated and unmanipulated time reporting make no difference.

I've noticed that when programmers are introduced to Agile they initially are bad at time estimates until they learn that you actually can report time according to the estimates and no-one will ever notice. Extra points for looking on the burndown chart before reporting time to make it smooth.

'Some developers (like Milton in this picture) may be causing problems because their ship dates are so uncertain: they need to work on learning to estimate better. Other developers (like Jane) have very precise ship dates that are just too late: they need to have some of their work taken off their plate.'

Milton is probably doing fair estimates. When planning four months into the future, you can probably only say where you will be to within 2 months with 50% probability (or whatever boxplot he uses). Jane's reporting is obviously manipulated to match the estimates.

tonyedgecombe(3892) 4 days ago [-]

You know that article was written to sell a feature in their bug tracker. I like Joel's writing but I'd take that piece with a pinch of salt.

andybak(1999) 4 days ago [-]

I once had a really convoluted metaphor for estimation which involved opening boxes that sometimes contained other boxes which sometimes contained other boxes... I wonder how that models mathematically.

andy_ppp(4017) 4 days ago [-]

The problem with this analogy is that the boxes do not obey the laws of physics and fit inside each other...

joe_the_user(3692) 4 days ago [-]

Well, in the most extreme form, you'd wind up with something like the Poisson distribution [1], where your chance of finishing each week would be the same.

This is not entirely implausible but it produces effects even more paradoxical/pathological than log-normal. Here, you could give a correct estimate of the expected time for a project to complete as 50 weeks. You could then reach week fifty and again give a correct estimate of 50 more weeks required. And then you could finish the next week.

[1] https://en.wikipedia.org/wiki/Poisson_distribution

dre85(10000) 4 days ago [-]

I'm blown away on a regular basis by how long it takes to write software that in my opinion is super easy.

Usually it just comes down to fuzzy/missing/changing requirements. A lot of times people know that they want something, but they don't know exactly what. Or they're absolutely sure they need feature x, but then later they realize they don't, but they missed out on developing other more fundamental features.

adrianmonk(10000) 4 days ago [-]

I have developed a belief about this: people don't know what they want until you show them what they said they want. Then it's immediately obvious to them what they wanted instead.

This suggests that demos and mock-ups might be a valuable tool. The sooner you can get someone to try something, the sooner they can tell you what direction they really wanted you to go in instead, and the less time you waste.

acd(4028) 4 days ago [-]

The US Navy developed something similar using the beta distribution. You estimate 'Optimistic', 'Most likely' and 'Pessimistic' times for each task in the project and then fit a beta distribution to them. Some tasks take way longer than estimated.

Here is a link describing this approach to time estimation with the beta distribution: https://www.isixsigma.com/methodology/project-management/bet...
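For reference, the standard PERT point estimate and spread that the linked article builds on (example values are made up):

    # The classic PERT/beta point estimate from the three inputs.
    def pert_estimate(optimistic, most_likely, pessimistic):
        mean = (optimistic + 4 * most_likely + pessimistic) / 6
        std_dev = (pessimistic - optimistic) / 6
        return mean, std_dev

    mean, sd = pert_estimate(2, 5, 20)  # days; made-up example values
    print(f"expected: {mean:.1f} days, std dev: {sd:.1f} days")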

maltalex(3616) 4 days ago [-]

I find this approach very interesting, but it hinges on the assumption that project completion times follow a beta distribution. What's the basis for that?

mattsfrey(10000) 4 days ago [-]

I've seen a similar model that I've adopted for my own projections which is 30/60/90 - the 30% chance scenario which is highly successful but less likely, 60% chance scenario that is good with a relatively decent chance of success, and the 90% chance scenario that is the lowest outcome but will almost certainly happen.

natorion(4022) 4 days ago [-]

You might also know this as PERT (https://de.wikipedia.org/wiki/Program_evaluation_and_review_...)

I used it in the past and I think it is a good framework to discover uncertainty and make it more visible, because it makes you talk about optimistic and pessimistic cases.

The estimates are also good enough to come up with a draft schedule.

rorykoehler(3462) 4 days ago [-]

I do this but only with best and worst case. Any estimate with a worst case of more than 5 days needs to be broken down again unless it's a known quantity. Thanks for the link, I had no idea this was part of Six Sigma.

arendtio(10000) 4 days ago [-]

Well, I think this is a complex topic. Not because of the math, but because the key to an accurate estimate is to understand who has done the estimate and on what basis.

As stated in the summary, the core driver for inaccurate estimates is uncertainty:

> Tasks with the most uncertainty (rather the biggest size) can often dominate the mean time it takes to complete all tasks.

There are different sources of certainty:

- Experience: If someone has done a task 20 times, he probably knows how much time he will require. Someone who hasn't done the task yet will probably underestimate the time he requires (e.g. because of the median vs. mean conflict). And don't be fooled: having 20 years of work experience in the field but never having done a specific task doesn't mean you can estimate it better than someone who just started the job but has done that specific task 10 times. However, most of the time, projects are doing something new. So you have to find out which tasks have been done before by a project member and which are completely new ground. If something is completely new, remember to plan time for getting familiar with the problem space plus a handful of complications (together this will be more than the actual task would take someone who is trained for that specific task).

- Detail: The smaller the tasks, the larger the overall estimate... or so. Planning at the top level is rarely going to be accurate. We do it a lot because it doesn't take much time. But if you want an accurate estimate, you have to plan in small, specific tasks.

- Risk management: Every project has risks. Some don't really have an impact and others blow up the whole project. Know your risks and what you are going to do if something should go in the wrong direction. It is not like you wouldn't have time to figure out what to do when the problem occurs but to understand how it would impact your timing and to take preventive actions (e.g. include stakeholders).

If you have people who have done the exact same task a few times, made a detailed plan of every step and know how to handle the most likely or impactful risks, you are in a good position to deliver on time. Most of the time you won't have that luxury and will have to compensate for the resulting uncertainty with a prolonged time to project completion, but that should be just fine as long as it is communicated at the beginning.

With all that said, remember, that some projects don't require an accurate estimate. Sometimes it is enough to deliver just as soon as possible.

helloindia(10000) 4 days ago [-]

On the experience part, this has happened to me a few times: the project manager asked me for an estimate, I gave it from my perspective and experience, and then he gave the work to someone else with no experience, and it obviously took longer than estimated.

Now I always give two estimates: one if the work is done by me, and one if the work is done by someone else.

zcanann(3993) 4 days ago [-]

I always think of project tasks as flow charts, where every item either takes 1 day, or 1 week. There's no way of really knowing in advance. Complications happen.

It makes it really hard to calculate the 'expected value' of 5-10 tasks.

dTal(4002) 4 days ago [-]

The more tasks you have to do, the more certain about duration you should be - some of the uncertainty will cancel out and you will get a gaussian distribution. For your example, I expect 10 tasks of between 1 day and 1 week each (with flat probability) to take about 6 weeks in total, with a 95% chance of completion within 7 weeks.
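A quick Monte Carlo check (reading '1 day and 1 week' as uniform over 1-5 working days, with 5-day weeks) roughly bears this out:

    import numpy as np

    # 10 tasks, each uniform between 1 and 5 working days.
    rng = np.random.default_rng(3)
    totals = rng.uniform(1, 5, size=(1_000_000, 10)).sum(axis=1)

    print(f"mean total: {totals.mean() / 5:.1f} weeks")                   # ~6.0
    print(f"95th percentile: {np.percentile(totals, 95) / 5:.1f} weeks")  # ~7.2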

yawz(866) 4 days ago [-]

Even the language is not correct: we all call it an 'estimate', but stakeholders behave like it's a 'commitment'. The passage from uncertainty to certainty happens in the language, and all the responsibility lands on the engineering team's shoulders.

c0vfefe(10000) 4 days ago [-]

> stakeholders behave like it's a 'commitment'.

Many places treat budgets the same way.

adrianmonk(10000) 4 days ago [-]

Even if you choose the right words, people don't necessarily pay close attention. And if they want a commitment, they may assume that the numbers you give them are a commitment regardless of how you phrase it.

But even if they did listen closely to what you said, 'estimate' is not even a great word. If your car is in a wreck, a body shop gives you an estimate to fix it, and they may treat that estimate as a commitment. It's pretty common practice that people are held to an estimate, or to not going over by a certain small percentage. Maybe we need a word like 'forecast' or 'prediction'.

basetop(10000) 4 days ago [-]

The Mythical Man-Month (required reading for most CS programs) goes into the historical and production reasons why software projects take longer than you expect.

Also, there is a law named after the author, Brooks's law: 'adding human resources to a late software project makes it later'.

https://en.wikipedia.org/wiki/Brooks's_law

In most industries, if you are running behind schedule, you throw more workers at the problem to catch up: laying railroad tracks, digging ditches, delivering packages, harvesting crops, etc. By adding more workers, you shorten the time it takes to complete the project. With software engineering, the reverse tends to happen. If you are falling behind, throwing more developers at the problem worsens it, most likely because the new developers need to get 'caught up' with the code/project/tools, and if you rush that process they won't have a full understanding and will introduce bugs/problems themselves, which exacerbates the problem.

It's a fun read if you have the time.

rainhacker(3278) 4 days ago [-]

Given that most software project estimates are off, I wonder if a corollary of Brooks's law could be: don't add resources in the later stages of 'any' software project.

v3gas(3548) 4 days ago [-]

> required reading for most CS programs

Really?

bunderbunder(3618) 4 days ago [-]

I've been one place that I thought was really good at software estimation. Their system was:

Everything gets a T-shirt size. Roughly, 'small' is no more than a couple person-days, 'medium' is no more than a couple person-weeks, 'large' is no more than a couple person-months.

Anything beyond that, assume the schedule could be unbounded. Figure out how to carve those into a series of no-larger-than-large projects that have independent value. If they form a series of iterations, don't make any assumptions about whether you'll ever even get around to anything but the first one or two. That just compromises your ability to treat them as independent projects, and that creates risk that you find yourself having to worry about sunk costs and writing down effort already expended when it eventually (and inevitably) turns out that you need to be shifting your attention in order to address some unforeseen business development.

At the start of every quarter, the team would commit to what it would get done during that quarter. There were some guidelines on how many small, medium or large projects they can take on, but the overriding principle was that you should under-promise and over-deliver. Lots of slack (1/3 - 1/2) was left in everyone's schedule, in order to ensure ample time for all the small urgent things that inevitably pop up.

There was also a log of technical debt items. If the team finished all their commitments before the end of the quarter, their reward was time to knock things off that list. Best reward ever, IMO.

SketchySeaBeast(10000) 4 days ago [-]

> Everything gets a T-shirt size. Roughly, 'small' is no more than a couple person-days, 'medium' is no more than a couple person-weeks, 'large' is no more than a couple person-months.

That's pretty much exactly what I've ended up using on past projects - a little more fine-grained (started at a half day, went up to months), but that was my approach as well, and if I didn't know everything it went up a size.

pwenzel(3973) 4 days ago [-]

The 'small', 'medium', 'large' approach sounds a bit like the one used in Pivotal Tracker.

afarrell(4036) 4 days ago [-]

An important aspect of being a professional software engineer is having the backbone to sometimes say things like:

- "I don't know yet enough about the problem to give you even a rough estimate. If you'd like, I can take a day to dig into it and then report back."

- "This first part should take 2-3 days. 5 on the outside. But the second part relies heavily on an API whose documentation and error messages are in Chinese and Google Translate isn't good enough. I'd need to insist on professional translation in order to even estimate the second part."

- "The problem is tracking down a bug rather than building something, so I don't have a good way of estimating this. However, I can timebox my investigation and if I've not found the cause at the end of the timebox, can work on a plan to work around the bug."

You need to be willing to endure the discomfort of looking someone in the face, saying "I don't know", and then standing your ground when they pressure you to lie to them. They probably don't want you to lie, but there is a small chance that they pressure you to. If you don't resist this pressure, you can end up continually giving estimates that are 10x off-target, blowing past them as you lose credibility, and running your brain ragged with sleep deprivation against a problem you haven't given it the time to break down and understand.

But when you advocate clearly for your needs as a professional, people are generally reasonable.

wiz21c(10000) 4 days ago [-]

>> and then standing your ground when they pressure you to lie to them.

spot on.

JustSomeNobody(3879) 4 days ago [-]

> "I don't know yet enough about the problem to give you even a rough estimate. If you'd like, I can take a day to dig into it and then report back."

This is my default. Even when I am fairly confident I know what the code does, I want to double check. And it isn't just so I don't short change myself on time. If I tell them X days and deliver on 0.X days several times, they'll start thinking I overestimate everything and cut me off at the knees. It pays to get as accurate as possible.

koonsolo(4012) 4 days ago [-]

I once had a manager/CEO who thought you could negotiate the planning the same way as a price: 'The customer always wants to get it earlier, the engineer later.' Me saying that only the developer is able to give a realistic plan was probably received as 'the developer wants to push the deadline back in his negotiation'.

In the end, his company went bust because he was selling stuff to customers that could never be delivered on time, or even never at all.

maxxxxx(3988) 4 days ago [-]

I'm on a project right now and I am telling people 'I don't know how long it will take because nobody here has ever done anything like this. The only thing I can tell you is that nobody is wasting time and we are solving problems as quickly as we can. Look at the JIRA board and see what we have done and what we are planning to do and make your own estimate'. So far I haven't been fired. I also refuse to stop work and start long planning meetings like I have seen in previous projects.

Aeolun(10000) 4 days ago [-]

> But when you advocate clearly for your needs as a professional, people are generally reasonable.

This has not been my experience. People want 'estimates' at all costs, tell you to not worry about any accuracy, and then a week later tell your manager you committed to x date.

hateful(4058) 4 days ago [-]

> You need to be willing to endure the discomfort of looking someone in the face, saying "I don't know", and then standing your ground when they pressure you to lie to them.

This. I have PMs who will ask me over and over until I give a number and I've learned to stand my ground. Because if I don't, I end up being responsible for the estimate I've given (as I should).

Now I make it clear that I will not give a number until I know more. Just a few weeks ago I was asked to estimate how long an integration with something I'd never used before would take. I said, 'I need 1-3 days to learn the product enough to give an estimate'. What I got back was 'can't you just give a WAG?'. But this time around I said, 'You have two choices: 1. give me time to learn what I need and then I'll give an estimate, or 2. find someone who already knows the product to give you an estimate.'

taherchhabra(4086) 4 days ago [-]

Once my CEO asked, 'can we add more developers and finish the project faster?' I used the famous quote '9 mothers cannot deliver a baby in one month' and we had a big laugh.

koonsolo(4012) 4 days ago [-]

> They probably don't want you to lie, but there is a small chance that they pressure you to.

In my experience, most of the time there was pressure. I remember a funny conversation between my project manager and a colleague:

PM: 'We need to have X as soon as possible, how long will it take you?'

Dev: 'Oh that's, easy, I can finish it tomorrow.'

PM: 'Tomorrow??? That's impossible, because it needs a and b, no?'

Dev: 'Yes you are right... I can probably finish it by next week.'

PM: 'Next week??? That is a long time! It's only a, b, c and d that needs to be done. Does it all take that long?'

Dev: 'You are right, I can probably finish it sooner. I can get it ready by this Friday.'

PM: 'That is perfect! The customer wanted to put in production on Monday, so we can now confirm. Thanks!'

I overheard that story (open office), and it was a real facepalm moment.

dboreham(3656) 4 days ago [-]

It's important to realize that the workplace often involves being subjected to what is really a kind of psychological abuse. It is not, I think, a coincidence that we are learning that many 'stellar managers' turn out to be abusers, e.g. guilty of sexual harassment. What you're describing is in psychological terms thought of as 'maintaining boundaries' against a person who is intent on transgressing those boundaries for their own gain or amusement.

If you validate the invalid assertions of these abusers (e.g. that some unclear task can be estimated with certainty) then you're enabling the abuse.

teddyh(2414) 4 days ago [-]

I would recommend this talk regarding professionalism:

https://www.youtube.com/watch?v=p0O1VVqRSK0#t=5m20s

EDIT: The part about estimates is here: https://www.youtube.com/watch?v=p0O1VVqRSK0#t=36m56s

snarf21(10000) 4 days ago [-]

I agree with you but like others have said, they want a date they can sell to and in the end it will be your fault.

It isn't just software that goes double - look at most custom building projects or government contracts. Almost everything takes twice as long and costs twice as much.

I read something once that has stuck with me. 'We can never give more than a vague guess because we are literally building something that doesn't exist and has never been built before.'

* I do find it annoying that the sales teams generally drive this. I wonder how they would feel about a response of 'We'll get you an estimate of when it will be available as soon as you get us an estimate of when this feature will generate the $1M of revenue to cover the costs of building it. Also, please give us a list of customers that are likely to buy it and your expected contract date so we can track it along with development.'

singingfish(10000) 4 days ago [-]

I like to mess with people. 'It's 80% done so there's only 80% left to go.' 'Now it's 90% done so there's only 90% left to go.' Etc. Of course my current project is rather higher stakes than usual, so I'm being a little more sober than I normally would be. However, I am emphasising the need for communication with other stakeholders, and identifying and trying to rectify bottlenecks pdq.

a_c(4104) 4 days ago [-]

I always consider four factors in estimating software cost

- the scope of the task

- the resource/people you have

- the confidence in the resource/people

- and the dependency on both the resource and the task

Estimation goes wrong in many ways. In my experience, I categorize as follow:

- estimated by non-technical PM, unable to gauge technical/business complexity

- estimated too early in the development life cycle. Often someone comes up with idea X and then asks right away, 'shouldn't be difficult, how long will it take?'

- no view on team member's productivity due to lack of measurement or measuring the wrong thing

- not considering dependencies in the development cycle, e.g. the question 'is our only backend engineer available for the task?' is often omitted

- not considering testing, documentation, 3rd party integration/procurement, maintenance and deployment cost

The common practice of coming up with a number and then doubling or maybe tripling it does not address why estimation is off. Mis-estimation is a symptom. The cause and the cure lie in the people.

moring(10000) 4 days ago [-]

On the other hand, you can only argue about the estimate if there is even a channel to argue about it. I have seen several cases of (1) people being told about a deadline through various proxies, so there was no point in time where they could even question the imposed deadline, (2) people not being told about the deadline at all until it was missed, so they could not argue against it, (3) people not being told about the task that has a deadline until after that deadline was missed.

All involved engineers actually had the backbone to say it was impossible, but no chance to say it. Obviously, projects did not go well at that place.

duxup(3873) 4 days ago [-]

Also clear communication (although you are pretty much describing that too).

I had a friend who had an unusually high frequency of conflicts with various management, project managers (I expect there are always some, but this guy had a lot of bad experiences).

He showed me an instant message conversation:

PM: 'Is X done?'

Dude: 'Yes, but a, b, c is not done'

A, b, and c were parts of X's requirements.

Now the project manager or such should have been able to read between the lines, but man, don't lead with 'Yes' and then describe a situation that is 'No'.

cs02rm0(4090) 4 days ago [-]

I find you have to do this, but it can damage professional relationships.

Recently I've flat out said no, as gently as I could, to doing some work - porting a software stack to run on Windows.

A customer is asking for an estimate to do it, because a project manager who wouldn't be developing or supporting (or even using) the system likes Windows. The sales manager isn't thrilled with me over it.

So he's asked the team as a whole to estimate it because I wouldn't. The rest of the guys are frontend devs. I don't have a clue, they're in a worse position but for some reason they're all up for putting a number to it so I look like a bit of a !

I'm a contractor, so I can and would just turn down the work, but it's difficult politically.

taneq(10000) 4 days ago [-]

100% this. It's not your job to say something that will make your client happy. It's your job to tell them your honest assessment of the task.

agumonkey(929) 4 days ago [-]

I wish school taught me how to do your first point.

People talk about project management, Gantt diagrams and all that, but you get no idea how to estimate, even grossly, the complexity of a thing.

GlennS(10000) 4 days ago [-]

Well, this is some good advice.

Another nice trick if you're under time pressure is to arrange the project so that you can deliver it in stages that partially meet the requirements, or even deliver something non-functional that people can play with while they wait.

This can keep the sales team rolling or customers happy for a while, buying you some time to write the program properly.

As a bonus, you might get earlier feedback if you're going in the wrong direction, in which case less time wasted.

(Obviously this approach isn't always possible. Many problems won't decompose nicely.)

Cthulhu_(10000) 4 days ago [-]

The other thing you can always refer back to is that the accuracy of an estimate increases the closer you are to finishing - that is, it's very inaccurate at the start. This is one of the pillars of the agile way of working: you and your team can give a fairly accurate estimate of what can be finished within a week or two. Not so much for a year.

The inaccuracy of longer estimates comes largely from changing priorities and a developing environment and understanding of what is being built.

artworx(10000) 4 days ago [-]

I highly recommend the book 'Software Estimation: Demystifying the Black Art'. I work for an outsourcing company and part of my job is to come up with estimates and it helped me deal with clients and managers.

The book contains a quiz that we used as part of a training exercise with management and the results were hilarious. Here is an online copy: https://scrumandkanban.co.uk/how-accurate-are-your-estimates... please don't look at the answers, I guarantee you will have fun completing it.

For clients, my preferred approach is to show 'The Cone of Uncertainty' and ask where they think they are. Since most people have no idea what they want to build, I ask if it's OK if my estimates are 4x off. That usually gets me a few weeks of peace while a team comes up with a product definition and we start all over again :)

tonyedgecombe(3892) 4 days ago [-]

>But when you advocate clearly for your needs as a professional, people are generally reasonable.

You should do all this but there will still be times when people are unreasonable, it's just human nature.

temp269601(10000) 4 days ago [-]

How about making the manager estimate the project? That way, if the deadline is not met, the manager receives the blame. It's the manager's job to manage resources, and if the deadline is not hit, they can hire/bring on more resources. If an engineer works as hard as they can for 40 hours a week, why is it the engineer's fault if the arbitrary deadline is not met? If the engineer estimates the time for a project, the engineer will always have to work more than 40 hours a week, because some estimates will be too optimistic.

ben509(10000) 4 days ago [-]

The manager does receive the blame, and then stuff rolls downhill.

jcon321(10000) 4 days ago [-]

Maybe if your PM has any technical experience

mbesto(3053) 4 days ago [-]

As always, my favorite article on this subject: https://www.lesswrong.com/posts/CPm5LTwHrvBJCa9h5/planning-f...

> A clue to the underlying problem with the planning algorithm was uncovered by Newby-Clark et al., who found that asking subjects for their predictions based on realistic "best guess" scenarios, and asking subjects for their hoped-for "best case" scenarios ... produced indistinguishable results.

> So there is a fairly reliable way to fix the planning fallacy, if you're doing something broadly similar to a reference class of previous projects. Just ask how long similar projects have taken in the past, without considering any of the special properties of this project. Better yet, ask an experienced outsider how long similar projects have taken.

jfehr(10000) 4 days ago [-]

Daniel Kahneman calls this the 'inside view' and 'outside view', from his book Thinking, Fast and Slow.

The relevant excerpt (mostly an anecdote that serves as an introduction to a whole chapter about it) can be found here: https://www.mckinsey.com/business-functions/strategy-and-cor...

StreamBright(2659) 4 days ago [-]

It would be great to have predictions that use ML to estimate how long something will take.

ska(10000) 4 days ago [-]

If you are thinking this as opposed to statistical modelling, what is the benefit you imagine?

harias(3600) 4 days ago [-]

COCOMO and Function point models use regression. That's ML too.

usgroup(3456) 4 days ago [-]

TLDR anyone?

markwkw(10000) 4 days ago [-]

Thesis: Developers are good at estimating median time to finish tasks. But the tasks that take longer, in fact take much, much longer than estimated.

E.g. Dev estimates that time to complete each of A, B, C tasks will be 2 days. In reality, A will take 1 day, B will take 2 days, but C will take 8 days.

Dev was right about the median time to complete each task (2 days) but average was much higher. Article goes into how to statistically model the distribution of actual time to complete tasks.

oli5679(2459) 4 days ago [-]

In the UK, bookmakers offer 'accumulator' bets, where a punter can select many outcomes, getting a big prize if 100% correct.

This takes advantage of punters' failure to accurately multiply probabilities - 10 events each with 80% probability have a joint probability of less than 11%.
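The arithmetic, for the record:

    p = 0.8 ** 10
    print(f"{p:.3f}")  # 0.107 -- under 11%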

Something similar happens with planning, where people fail to compound many possible delay sources accurately.

Daniel Kahneman covered this in Thinking, Fast and Slow, also showing that people overestimate their ability, thinking they will outperform a reference class of similar projects because they are more competent.

https://en.m.wikipedia.org/wiki/Planning_fallacy

Sahhaese(10000) 4 days ago [-]

It doesn't take advantage in the way you say, because you get paid along the same lines. If you bet an accumulator with 2 selections at evens, you get paid at 3/1 (4.0 decimal odds), so that is 'fair'.

It's profitable for bookies because they have a house edge, and that edge is increased the more subsequent bets you make. The house has more edge with an accumulator than a single bet.

People like to do accumulators because it's more meaningful to win large amounts occasionally than to win less meaningful amounts more regularly.

So it's a 'trick' to simply increase gambling.

If you had to pick out 8 even-money shots in sequence and intended to roll over your bet each time, it would have the same effect/outcome, but starting with a pound, by the last bet you're effectively placing a 128-quid bet at even money.

It's not that the player thinks they have a better chance of winning than 1/256; it's that it effectively forces them to gamble a much larger amount in the situations where only 7 out of 8 of their picks come in.

And that's before considering the edge. If we consider that these are probably events that happen more like only 45% of the time (at best), then instead of a 255/1 shot we're looking at a 600/1 shot.

Chris2048(3384) 4 days ago [-]

I'm sorry to contribute to the dogpile effect (long thread probably says same thing I'm about to say, but I didn't see it..), but..

devs estimate known risks. Ideal path + predictable delays. The further reaches of the long tail are the unknown risks.

known risks are estimated based on knowledge (hence, a question for a dev), unknown risks are just an adjustment parameter on top of that estimate, possibly based on historical evidence (there is no reason a dev could estimate any better).

It should be management's job to adjust a dev estimate. Let's be real here - I've never heard of a real-life example of management using stats for this kind of thing, or being enthusiastic about devs doing the same.

Perhaps if management is taken seriously as a science, things will change, but I doubt it.

<strong_opinion type='enterprise_software_methodology_cynicism'>

Bizness is all about koolaid-methodology-guru management right now, very much the bad old ways - a cutting example of workable analytical management would be needed for things to change, but this is unlikely as all the stats people are getting high pay cool ML jobs, and aren't likely to want to rub shoulders with kool-aider middle managers for middle-management pay..

</strong_opinion>

ska(10000) 4 days ago [-]

It's actually not super uncommon for management to use statistical tools for this, although they may not realize what exactly is going on. For example, there are a couple of tools whose names escape me at the moment that extend or wrap MS Project and have a statistical time-estimate piece based on quantile time estimates (e.g. your engineers give you 50% and 90% confidence estimates, and it applies simple modeling).

I've also been at one shop where we used a more sophisticated modeling approach, but built in house. Reception was warm in at least most of management.

So my experience isn't that these tools don't exist, or that management is unwilling to use them - but rather that management has trouble accepting the implications of using them properly. Specifically, if any of the inputs of the model change, you should update those and re-run the model. With linear (i.e. bullshit) Gantt charts, you just say 'oh, we missed this week of work, move the endpoint forward a week'. With more careful modelling, adding an unexpected dependency or a couple of days' work in the wrong place can suddenly add 3 months to your worst-case model. Execs really don't like that happening without a whole song and dance about why, so there is a tendency to 'freeze' the models in ways that make them progressively less useful. Worst case, they become asymptotically meaningless.

mikekchar(10000) 4 days ago [-]

The interesting thing is that, by the central limit theorem, the mean of a sample is approximately normally distributed. This is extremely helpful. Here's what I suggest you do:

Same-size your stories to small values. Do 30 stories in a sprint and take the mean. Do 30 sprints and take the mean across sprints. What you get is the mean amount of time to do a sprint of 30 stories. What's amazing is that this estimate will be (approximately) normally distributed. You can measure the variance to get error bars.

Of course, that's 900 stories to get good estimates ;-) However, imagine that your stories average 2 days each and that you have a team of 10 people. That means you will finish a 'sprint' of 30 stories in 6 days (on average). 30 sprints is 180 days -- the better part of a year, but you probably don't need a 95% confidence interval.

You will find that after a few sprints, you'll be able to predict the sprint length pretty well (or if you set your sprints to be a certain size, then you will predict the number of stories that will fit in it, with error bars).

The other cool thing is that by doing this, you will be able to see when stories are outliers. This is a highly undervalued ability IMHO. Once a story passes the mean plus a standard deviation or so, you know you've got a problem. Probably time to replan. If you have a group of stories that are exceeding that time, then you may have a systemic estimation problem (often occurs when personnel change or some kind of pressure is being applied to the team). This kind of early-warning system allows you to start trying to find potential problems.

This is really the secret behind 'velocity' or 'load factor' in XP. Now, does it work on a normal team? In my experience, it doesn't because groups of people are crap at calmly using statistics to help them. I've had teams where they were awesome at doing it, but that was the minority, unfortunately.
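A tiny sketch of that bookkeeping, with made-up numbers:

    import statistics

    # Hypothetical per-sprint durations (days).
    sprint_days = [6.1, 5.8, 7.0, 6.4, 5.9]
    mu = statistics.mean(sprint_days)
    sd = statistics.stdev(sprint_days)
    print(f"next sprint: {mu:.1f} +/- {sd:.1f} days")

    # Flag outlier stories once they pass the mean plus a standard deviation.
    story_days = [1.8, 2.2, 9.0, 2.1, 1.5]
    m, s = statistics.mean(story_days), statistics.stdev(story_days)
    print("replan candidates:", [d for d in story_days if d > m + s])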

piccolbo(10000) 4 days ago [-]

The central limit theorem is a limit as the number of variables in the sum approaches infinity. In the finite world, the article explains how it's done. The article is saying the sum of lognormals is not normal; you are saying: take enough of them and it is approximately normal. The article is still more accurate than your reasoning for 30 stories. From the Wikipedia entry for the central limit theorem: 'As an approximation for a finite number of observations, it provides a reasonable approximation only when close to the peak of the normal distribution; it requires a very large number of observations to stretch into the tails.' To produce a 95% confidence interval, you have to upper-bound the tails. All methodologies based on summing subtask estimates are not evidence-based. But we already knew software methodologies are not evidence-based, didn't we?

pjungwir(3467) 4 days ago [-]

Here is a story about the woes of interpreting statistical distributions:

I have two habits when I estimate: First, I like to give a range, or even better a triple estimate of 'optimistic', 'expected', and 'worst case'. (And btw, you should expect things to be weighted toward the pessimistic side, because you always find new requirements/problems, but you rarely discover that a supposed requirement is not really one.)

Second: I like to break down a project into tasks of just a few hours and then add everything up. I usually don't share that spreadsheet with the customer, but it helps me a lot. Pretty much always it gives me a number higher than I'd like, but which is almost always very accurate. A couple times I've ignored the result and given a lower estimate because I really wanted some project, and it has always turned out the spreadsheet was right.

Well, one time I combined these two approaches, so that my very finely-chopped estimates all had best/expected/worst values, and I shared that with the customer. Of course they took one look at the worst-case total and said, 'How can a little thing like this possibly take 2 years??' I didn't get the work. :-)

EDIT: Btw it feels like there is a 'coastline paradox' here, where the more finely you estimate, the higher the max possible, so that you can make your estimate grow without bound as long as you keep splitting items into greater detail. It'd be interesting to see the math for that.

pjungwir(3467) 4 days ago [-]

EDIT2: In spite of my personal experience I do think the author makes a strong case for this: 'Adding up task estimates is a really misleading picture of how long something will take.' Perhaps I've had better results because I try to give myself a little padding in every task, just considering that everything requires not just typing code but getting on some calls, having back-and-forth in emails and the project management tool, testing (automated or manual), fixing a thing or two, etc. So my individual estimates are probably a bit higher than median. When I work with other developers they consistently estimate lower than me. But my numbers are deliberately not 'best case', because then you know you'll go over on the total.

nocturnial(10000) 4 days ago [-]

Wouldn't it be more sensible to give a range instead of a fixed date? I know it's not going to happen, but I think it would be more informative and honest.

That way you could communicate your level of certainty better. There's a difference between saying something will be completed in, for example, 6 months ± 2 weeks versus 6 months ± 1.5 months. The estimated time is the same, but the level of certainty is communicated much more clearly.

Or just use a range, for example 4-6 months, if you don't want to use ± notation.

trustfundbaby(1155) 4 days ago [-]

Never works. Management will just use your lowest estimate.

So you pad that and realize it's the same as your high estimate, so you just give them that instead ... then they complain it's too high ... rinse, repeat. Unless you stand your ground.

DougWebb(4082) 4 days ago [-]

In my experience, developers understand and appreciate range-based estimates. But when those numbers start moving up the communication chain, some non-developer is going to either not like or not understand the point of the range, and will convert it to a single number: either the first one, the last one, or the average. They might even be honest about it, thinking 'I need to know the earliest possible date, so I'll use the low end'. But then the next person, who doesn't see the range, thinks that low end is THE committed date and will plan accordingly. Now your deadline is 99% likely to be missed.

SilasX(3980) 4 days ago [-]

Pet theory: this is entirely explained by unknown systems not behaving as expected. As developers, and unlike e.g. carpenters, we are constantly using new tools with effects we haven't yet experienced. Then we have to yak-shave to get around their heretofore unknown kinks. Then the time blows up.

If and when you're using known features of a known framework, and that's all you're doing, the estimates are accurate and the work is completed quickly.

maltalex(3616) 4 days ago [-]

I disagree. Estimates tend to be just as wrong even when the tools are well known.

There's always that one edge case you haven't considered, that one algorithm that doesn't work as well as you expected, that small change to the requirements that requires a completely different approach.

scandox(2600) 4 days ago [-]

In the world of small and medium projects, the major issue is often that software engineers give estimates for writing the software, but customers take that to mean time to actual delivery in production, and a lot of the time they have no idea how big a task deployment and integration are... or don't even have a plan for that.

maxxxxx(3988) 4 days ago [-]

In medical devices, it usually takes five times as long to really finish the project as it does to finish development.





Historical Discussions: 5G Is Likely to Put Weather Forecasting at Risk (April 16, 2019: 661 points)

(661) 5G Is Likely to Put Weather Forecasting at Risk

661 points 4 days ago by szczys in 1665th position

hackaday.com | Estimated reading time – 7 minutes | comments | anchor

If the great Samuel Clemens were alive today, he might modify the famous meteorological quip often attributed to him to read, "Everyone complains about weather forecasts, but I can't for the life of me see why!" In his day, weather forecasting was as much guesswork as anything else, reading the clouds and the winds to see what was likely to happen in the next few hours, and being wrong as often as right. Telegraphy and better instrumentation made forecasting more scientific and improved accuracy steadily over the decades, to the point where we now enjoy 10-day forecasts that are at least good for planning purposes and three-day outlooks that are right about 90% of the time.

What made this increase in accuracy possible is supercomputers running sophisticated weather modeling software. But models are only as good as the raw data that they use as input, and increasingly that data comes from on high. A constellation of satellites with extremely sensitive sensors watches the planet, detecting changes in winds and water vapor in near real-time. But if the people tasked with running these systems are to be believed, the quality of that data faces a mortal threat from an unlikely foe: the rollout of 5G cellular networks.

Where's the Water?

To understand how a new generation of wireless technology can deleteriously impact weather forecasting, it helps to take a look at exactly what powers the weather, and what these satellites are looking at. Our weather is largely the result of differences between air masses. Pressure, temperature, and moisture, each determined by energy inputs from the Sun, all team up in a complex manner to determine where and when clouds will form and which direction the winds will come from. Remotely sensing these differences is the key to accurately forecasting the weather.

The satellites that watch our weather are largely passive sensor platforms that measure the energy reflected or emitted by objects below them. They gather data on temperature and moisture — pressure is still measured chiefly by surface measurements and by radiosondes — by looking at the planet in different wavelengths. Temperature is measured mainly in the optical wavelengths, both visible and infrared, but water vapor is a bit harder to measure. That's where microwaves come in, and where weather prediction stands to run afoul of the 5G rollout.

NASA's Advanced Microwave Sounding Unit (AMSU-A1). Source: ESA

Everything on Earth – the plants, the soil, the surface water, and particularly the gases in the atmosphere – both absorb and, to a lesser degree, emit microwave radiation. Measuring those signals from space is the business of satellites carrying microwave radiometers, essentially sensitive radio receivers tuned to microwave frequencies. By looking at the signals received at different wavelengths, and by adding in information about the polarization of the signal, microwave radiometry can tell us what's going on within a vertical column of the atmosphere.

For water vapor, 23.8 GHz turns out to be very useful, and very much in danger of picking up interference from 5G, which will use frequencies very close to that. Since microwave radiometers are passive receivers, they'll see pretty much everything that emits microwave signals in that range, like the thousands of cell sites that will be needed to support a full 5G rollout. Losing faint but reliable water vapor signals in a sea of 5G noise is the essential problem facing weather forecasters, and it's one they've faced before.

Real World Consequences

At the 2019 annual meeting of the American Meteorological Society, Sidharth Misra, a research engineer at NASA's Jet Propulsion Laboratory, presented data showing how commercial enterprises can have unintended consequences on the scientific community. Between 2004 and 2007, satellite-based microwave radiometers detected an increase in noise in a curious arc across the top of the United States. A similar signal was detected by another satellite, with the addition of huge signals being returned from the waters off each coast and the Great Lakes. The signals turned out to be reflections from geosynchronous direct TV satellites, bouncing off the surface and swamping the water vapor signals the weather satellites were trying to measure.

Reflections from DTV satellites can effectively blind microwave radiometers. Source: AMS meeting panel discussion, "The Wizard Behind the Curtain?—The Important, Diverse, and Often Hidden Role of Spectrum Allocation for Current and Future Environmental Satellites and Water, Weather, and Climate"

But surely the scientists are overreacting, right? Can losing one piece of data from as complex a puzzle as weather prediction really have that much of an impact? Probably yes. The water vapor data returned by microwave radiometers like the Advanced Microwave Sounding Unit (AMSU) aboard a number of weather satellites is estimated to reduce the error of weather forecasts by 17%, the largest contributor by far among a group of dozens of other modalities.

The loss of microwave water vapor data could have catastrophic real-world consequences. In late October of 2012, as Hurricane Sandy barreled up the East Coast of the United States, forecasts showed that the storm would take a late turn to the northwest and make landfall in New Jersey. An analysis of the forecast if the microwave radiometer data had not been available showed the storm continuing in a wide arc and coming ashore in the Gulf of Maine. The availability of AMSU data five days in advance of the storm's landfall bought civil authorities the time needed to prepare, and probably reduced the casualties caused by the "Storm of the Century", still the deadliest storm of the 2012 season.

Superstorm Sandy would have been predicted to track into the Gulf of Maine (red) without microwave water vapor data. It actually landed in New Jersey, as predicted five days out with the satellite data (black).

Auction Time

So exactly where are we with this process? The FCC auction of licenses for the Upper Microwave Flexible Use Service (UMFUS), which offers almost 3000 licenses in the 24-GHz band, began on March 14, 2019, despite a letter from NASA Administrator Jim Bridenstine and Secretary of Commerce Wilbur Ross requesting that it be delayed. FCC Chairman Ajit Pai rejected the request, stating that there was an "absence of any technical basis for the objection."

Will the 5G rollout negatively impact weather forecasts? It's not clear. Licensees are required to limit out-of-band emissions, but with so many 5G sites needed to cover the intended service areas, and with the critical 23.8-GHz water vapor frequency so close to the UMFUS band, there's not much room for error. And once the 5G cat is out of the bag, it'll be difficult to protect that crucial slice of the microwave spectrum.

Whatever happens, it doesn't look good for weather forecasting. The UMFUS auction proceeds apace, and has raised almost $2 billion so far. Companies willing to spend that much on spectrum will certainly do whatever it takes to realize their investment, and in the end, not only will science likely suffer, but lives may be put at risk for the sake of 5G as our toolset for predicting dangerous weather faces this new data-gathering challenge.




All Comments: [-] | anchor

oceliker(10000) 4 days ago [-]

I'm somewhat confused. I'll admit that I'm not very familiar with super high frequency radio, but isn't the difference at least 200 MHz, approximately 10 times larger than the entire FM radio spectrum? Doesn't out-of-band emission stop being a problem at that much separation? Or should we look at it relative to the base frequency?

edit: For what it's worth, I found this paragraph from the FCC last year: https://www.federalregister.gov/d/2018-14806/p-20 It sounds like they're saying 'we don't know if this will be a problem yet, but be prepared to limit emissions in the 23.6-24 GHz range because we might require it at some point'.

Also, paragraph 9 of the same document has the actual band limits (with a special requirement) if anybody is interested:

> The 24 GHz band consists of two band segments: The lower segment, from 24.25-24.45 GHz, and the upper segment, from 24.75-25.25 GHz

> any mobile or transportable equipment capable of operating in any portion of the 24 GHz band must be capable of operating at all frequencies within the 24 GHz band, in both band segments
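A quick back-of-the-envelope on the absolute-vs-relative question (illustrative numbers only): filter rolloff and oscillator phase noise tend to scale with the carrier frequency, so the relative figure is the one that matters.

    # 200 MHz of separation dwarfs the ~20 MHz FM broadcast band,
    # but is under 1% of a 24 GHz carrier.
    separation_hz = 200e6
    fm_band_hz    = 20.5e6    # 87.5-108 MHz
    carrier_hz    = 24.25e9

    print(f"separation / FM band: {separation_hz / fm_band_hz:.1f}x")
    print(f"separation / carrier: {100 * separation_hz / carrier_hz:.2f}%")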

staticfloat(10000) 4 days ago [-]

While bandwidth is typically measured in absolute Hz difference due to the spread directly relating to the amount of information-carrying capacity, when you are actually generating signals many things scale with carrier frequency.

The way most of these signals are being generated is through an 'information signal' (usually referred to as a modulator) being created within your device, then moved up in frequency by combining it with a 'carrier signal'. This separation allows the information signal's properties (such as bandwidth, modulation scheme, bitrate/Hz, etc...) to be more-or-less independent from the physical characteristics of propagation, which will be more strongly related to the carrier signal's properties (e.g. water absorption, reflection/transmission off of/through materials, etc...).

However, no process is perfect, and although we would like to generate perfect signals, we can't. Distortion appears in multiple parts of this pipeline, both in the generation of the low-frequency modulator and the high-frequency carrier. Distortion typically comes from 'linear' components not being perfectly linear and thereby generating harmonics of the signal passing through them. In the case of the modulator, this splatters signal energy up and down the spectrum on the order of the bandwidth of the modulator, but in the case of the carrier it does so on the order of the carrier frequency. This is all a matter of degree and depending on the application may not be that big a deal, but it definitely must be addressed for high-density communications equipment like cell networks.

All transmitters have filters at multiple stages of their signal processing chains, both lowpass filters that filter the modulator before it's mixed with the carrier signal to boost it up in frequency, as well as bandpass filters that ensure the output stays within the bounds it's meant to be; but these filters only do so much and they can be expensive to create (as they can have rather fine physical tolerances) so everybody is always just playing the 'do the best job we can for the least money' game.

Luckily, most transmitters are also connected to antennae that provide a convenient filtering on the output (they only resonate at the frequencies they transmit at) which helps for specialized systems that only operate at a single transmit frequency, but for something like 5G is less helpful, due to the many channels it supports, causing the antenna to necessarily support a wide range of frequencies.
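A minimal numerical illustration of the harmonics point (a toy cubic nonlinearity, not a model of any real transmitter):

    import numpy as np

    fs = 1e6                                # sample rate, Hz
    t = np.arange(0, 0.01, 1 / fs)
    tone = np.sin(2 * np.pi * 50e3 * t)     # clean 50 kHz input

    # A slightly nonlinear "amplifier": y = x + 0.1*x^3.
    # The cubic term puts energy at 3x the input frequency (150 kHz).
    out = tone + 0.1 * tone**3

    spectrum = np.abs(np.fft.rfft(out))
    freqs = np.fft.rfftfreq(len(out), 1 / fs)
    print(freqs[spectrum > 0.01 * spectrum.max()])   # ~50 kHz and ~150 kHz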

borkt(4099) 4 days ago [-]

'At the high end of the electromagnetic spectrum, signals travel over a band of 10 million trillion Hz (that is, 10^22Hz). This end of the spectrum has phenomenal bandwidth, but it has its own set of problems. The wave forms are so miniscule that they're highly distorted by any type of interference, particularly environmental interference such as precipitation. Furthermore, higher-frequency wave forms such as x-rays, gamma rays, and cosmic rays are not very good to human physiology and therefore aren't available for us to use for communication at this point.'

Not the best analysis, but I'm at work. Basically, like any other metric, as the frequency increases the space between peaks and valleys decreases, and it becomes harder to distinguish one signal from another: 30 Hz vs 230 Hz is much easier to tell apart than 15 kHz vs 15.2 kHz if you're listening to audio tones. Once you get to microwaves this of course becomes much more difficult.

http://www.informit.com/articles/article.aspx?p=24687&seqNum...

ummonk(4069) 4 days ago [-]

It should definitely be looked at relative to the base frequency.

derekp7(4059) 4 days ago [-]

I was wondering, what exactly 5G will bring us. For the most part, all the tasks I need to do on a phone (pocket computer / communicator) can be done even with 3G (video, streaming music, and any website's loading time is more than acceptable at 3G speeds).

The only thing I can think of is that 5G will allow for more overall network bandwidth, so the data caps on 'unlimited' plans wouldn't be needed. But compared to how we use our phones today, what new things will we be able to do with 5G that we can't do with current 4G/LTE?

robmiller(3745) 4 days ago [-]

5G attempts to market a compelling reason to upgrade a phone and stave off the slowing phone replacement cycle?

jdietrich(4107) 4 days ago [-]

>But compared to how we use our phones today, what new items will be be able to do with 5G that we can't do with current 4G/LTE?

5G has the potential to provide throughput and latency that is comparable to a fixed broadband connection. In reasonably competitive markets (i.e. not the US), that's A Big Deal. Latency is a particularly acute issue in many applications; 4G generally adds about 50ms in the best case scenario, but 5G can easily provide sub-millisecond latency. Imagine a near-future where it simply doesn't matter whether you're on WiFi or cellular, because they both provide the same experience.

drdaeman(4103) 4 days ago [-]

I think I've read something about significant improvements to latency. Don't know if true or not.

Latency-sensitive applications over 4G/LTE, like SSH or games feel sort of sluggish.

joshmn(2333) 4 days ago [-]

> so the data caps on 'unlimited' plans wouldn't be needed

They're not even needed now. It's all about money.

tomschlick(3049) 4 days ago [-]

> I was wondering, what exactly 5G will bring us.

WISP internet service to the home.

ronnier(1947) 4 days ago [-]

Have a decent connection where there's a large gathering of people.

cobookman(3535) 4 days ago [-]

4G offers enough speed to replace my home internet connection; however, aggregate bandwidth at the cell towers prevents this.

I assume with 5G we're just about at the point where you could ditch the home internet connection and go 100% cellular.

amalcon(10000) 4 days ago [-]

Demand for bandwidth has historically expanded to match the bandwidth available. Folks will find some use for it.

Chris_Chambers(10000) 4 days ago [-]

How are they supposed to discreetly kill problematic slaves from a distance with multiple cancer rays focused on the target's body if they don't have the proper equipment in place?

MrMember(3785) 4 days ago [-]

With data caps I don't see much use for it. I still think it's hilarious that carriers advertise download speeds that would blow through most people's monthly data allowance in at most a couple of minutes.

JohnFen(10000) 4 days ago [-]

As near as I can tell, the only benefit that consumers will see from 5G is that it will resolve a real problem with the cell system right now: capacity.

In very congested areas, the total supportable capacity is already being completely utilized. This is why people in very congested areas experience call drops or problems with network availability. 5G would go far to resolve this.

However, that's the only consumer-level benefit that seems realistic to me, and it would only be noticed by the people in dense areas. All the other stuff, such as increased speeds, etc., appears to be nonsense.

microcolonel(4103) 4 days ago [-]

I'm thinking it's probably an ingenious marketing move intended to force the people to install equipment from a certain country despite concerns, in order to keep up appearances that investments are being made in the technology.

From a consumer perspective, LTE's great in its current form. I've never seen any actual person complain about network speeds in a region where LTE is properly deployed, and I don't know of any application that struggles on this infrastructure.

kalleboo(3856) 4 days ago [-]

The most common suggestions I see are mobile applications that benefit from the lower latency afforded by 5G - stuff like VR/AR, gaming, self-driving cars.

Aside from that the real reason money is being poured into 5G appears to be to replace the last mile / home internet.

szczys(1665) 4 days ago [-]

You should listen to Shahriar Shahramian on The Amp Hour Podcast: https://theamphour.com/430-shahriar-discusses-5g/

There are a ton of interesting things in it, like how 5G is for a lot more than cellphones. I've always heard that 5G will have phased arrays and I wondered how that would work; he talks about the possibility that the array is on the roof of a car and your devices connect back to that. There are lots of issues, like antenna size and power consumption, in getting those blazing fast speeds.

i_call_solo(10000) 4 days ago [-]

You're making the assumption that 5G and future advanced wireless systems are exclusive to phones. In the future this will not be the case at all.

johnmoberg(3582) 4 days ago [-]

One use case could be high resolution, low latency video streaming for remote control of autonomous vehicles, e.g. trucks.

posixplz(10000) 4 days ago [-]

Nothing, really. The bottleneck for cell data throughput (and latency) is backhaul. Improving backhaul isn't sexy like changing the "4" to a "5". Improving backhaul is very expensive (and slow), and counter to one's business goals: telecoms in the US want just the right amount of congestion to justify network upgrade subsidies, even if that means they do not adhere to ITU requirements.

The best "feature" of 5G is that it's a great opportunity institute rate-hikes.

module0000(10000) 4 days ago [-]

Human (individual) customers won't reap the huge benefit, IMO. This is a big step up for commercial sites though, where cellular modems serve as fail-over for fiber/commodity service. These customers typically have unlimited data plans and are more than happy to pay whatever outrageous data fees keep their network operating.

neom(2129) 4 days ago [-]

Ajit Pai seems to think it's about: Rural Connectivity. Fiber Infrastructure. National Competitiveness: https://youtu.be/jKbAdEVOaDY?t=406 (Ajit starts around the 8 minute mark)

kllrnohj(10000) 4 days ago [-]

> can be done even with 3G (video, streaming music, and any website's loading time is more than acceptable at 3G speeds).

I think you forgot how slow 3G was/is. It's around 512 kbps. You can stream music on that, but streaming video is a straight nope (480p on YouTube wouldn't even be happy with 3G), and even just browsing the web will be a frustratingly slow experience.

4G LTE definitely has the bandwidth to do all those things at reasonable quality/throughput, though, particularly for the resolution and form factor of the device in question.

mholt(1759) 4 days ago [-]

Can someone help me understand this better? What is 'very close' to 23.8-GHz frequencies? I don't know which bands 5G operates on, but it seems [1] that the closest they get, at least in the US, is ~27 GHz. If the FCC is auctioning 3000 licenses for the 24 GHz space, is that the space that can potentially interfere more? Can 5G operate on just any frequencies, then?

[1]: https://www.cablefree.net/wirelesstechnology/4glte/5g-freque...

mrweasel(4010) 4 days ago [-]

Europe, or at least the EU, has settled on 24 GHz; that's pretty close to 23.8 GHz.

TrueDuality(10000) 4 days ago [-]

The carrier signal of a new technology could potentially use any frequency that isn't already in use. A lot of the frequency spectrum is already accounted for and generally parts of the spectrum that are important to scientific research are internationally protected (Such as the resonant frequency of hydrogen which is used extensively in radio astronomy).

That being said it is more technically difficult to go 'up' in frequency. Lower frequencies have less bandwidth, but are easier to generate, can go through physical objects better and for longer distances.

This fight to me looks like the industry wants to use a cheaper frequency that meets their minimum technical goals (in terms of development costs not necessarily licensing costs) science and other applications be damned.

Jeff_Brown(4096) 4 days ago [-]

So how hard is it to limit the 5G signal to bandwidths that don't interfere with weather forecasting, and how hard is it to detect and enforce laws against such bandwidth spillover?

toomuchtodo(2496) 4 days ago [-]

Easy to detect, hard to overcome cellular network lobbyists.

wyldfire(626) 4 days ago [-]

> detect and enforce laws against such bandwidth spillover?

This phenomenon is called adjacent-channel interference and violations can and would be enforced by the FCC. The challenge isn't detection or enforcement, it's the challenge of the different government agencies to balance the people's needs properly.

FCC wants to auction off the spectrum to benefit telcos and their customers. NOAA wants to protect people with accurate predictions of hazardous weather. Your question presumes that the weather prediction function is more valuable, but the government may not reach that same conclusion.

woliveirajr(3435) 4 days ago [-]

> Does this also mean that 5G will suck, when it's raining? [from a comment below the article]

If 5G uses almost the same frequency where microwaves detect water vapour (around 24 GHz), won't the weather have a great impact on it?

Also, I always thought that such small waves would have problems with obstacles, with good signal just when your phone is in line-of-sight with antennas.

penagwin(4099) 4 days ago [-]

That's all correct; the higher frequency suffers from worse object penetration. One solution I've heard is that 5G would likely involve neighborhood or even per-building repeaters.

IMO 5G is massively overhyped. My iPhone 7+ isn't limited by 4G LTE, it's limited by Verizon deciding to only allow it 10 Mbps down (with great signal). 5G won't matter one bit if the current bottleneck isn't 4G LTE in the first place.

cptskippy(10000) 4 days ago [-]

> If 5G uses almost the same frequency where microwaves detect water vapour (around 24 GHz), won't the weather have a great impact on it?

There's a general misunderstanding about the technology that leads people down this road of thought.

5G is broken up into two frequency ranges, FR1 and FR2. FR1 is everything below 6 GHz and encompasses the same spectrum as traditional cellular technologies. FR2 is everything over 24 GHz, and that's the bit everyone is confused about.

FR1 is like traditional cellular and will be slapped on cell towers to provide broad coverage over a wide area with performance characteristics similar to what we have today with LTE. It's not very exciting but it's 5G and this is what everyone is currently rolling out.

FR2 is meant to be absorbed, otherwise you'd have a big problem. Unlike FR1, which limits you to 100 MHz of bandwidth per channel, FR2 mandates that channel bandwidth be between 50-400 MHz. So at a minimum, an FR2 channel will have half the maximum allowable bandwidth of FR1. If FR2 propagated more than a very short distance, the airwaves would quickly be saturated by a small number of users.

FR2 is intended to be deployed in very dense areas like indoors. You'd be able to deploy many cell sites without worrying about overlap or signal propagation because everything from walls to moisture in the air will absorb the signals.

It might also be possible to slap an FR2 cell site on top of every lamp post going down a street.

spaceheretostay(4115) 4 days ago [-]

That's interesting - I'm working on using cell signal strength as an indicator of live weather features! There's a simple relationship between signal strength and the clearness of the weather: light rain adds a slight distortion to the signal, heavier rain a heavier distortion, and so on.

I'm betting that when all is said and done, the cell phones will help the weather forecast more than hurt it - but this may take some years if the cell companies are too greedy about it.

I'm working on detecting weather using all kinds of phone sensors like barometers and cameras in All Clear if you're interested: https://play.google.com/store/apps/details?id=com.allclearwe...

and the open source sensor package: https://github.com/JacobSheehy/AllClearSensorLibrary
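The usual starting point for inferring rain from link attenuation is the power-law relation gamma = k * R^alpha (ITU-R P.838 style); the coefficients below are rough ballpark values for frequencies around 24 GHz, not ones taken from the app above:

    # Invert gamma = k * R^alpha to estimate rain rate from excess path loss.
    # k and alpha depend on frequency and polarization; these are illustrative.
    k, alpha = 0.15, 1.0

    def rain_rate_mm_per_h(extra_loss_db: float, path_km: float) -> float:
        gamma = extra_loss_db / path_km      # dB/km of excess attenuation
        return (gamma / k) ** (1 / alpha)

    # e.g. 3 dB of unexplained loss over a 2 km link:
    print(f"{rain_rate_mm_per_h(3.0, 2.0):.1f} mm/h")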

JohnFen(10000) 4 days ago [-]

But that approach, at best, only tells you about the weather near the ground. That can be useful, but is woefully incomplete. You also need to be able to see weather high in the atmosphere.

RivieraKid(4029) 4 days ago [-]

How does it convert signal strength to clearness of weather? I assume you have a map of baseline signal strength in clear weather? How accurate is it compared to a weather radar?

samstave(3684) 4 days ago [-]

Is there a map of cell-strength detectors all over (actual devices, not a predictive map)?

I think it's really interesting that water is literally the reason life exists on this planet. Water has shielded life from radiation from space (in addition to the magnetosphere, etc.), and life was able to evolve in that protective layer of water.

So I find the combination of measuring radiation propagating through water, as you describe, and the potential health concerns of humans creating a new, fairly powerful and ubiquitous source of radiation (5G), to be something we really need to pay attention to.

qwerty456127(4095) 4 days ago [-]

A network of well-calibrated surface and marine weather stations and atmospheric probes is probably enough to produce reliable and precise weather forecasts in today's age of ML.

wyldfire(626) 4 days ago [-]

You can't calibrate away the RF noise introduced by transmitters that change frequently in time and space.

That's not to suggest that you cannot find a way to discriminate between the water vapor and the 5G transmissions, but you can't just take a sample on a low-humidity day and subtract that from new samples. If the metrics are below the new noise floor, merely throwing machine learning at the problem will not solve it.

plopz(10000) 4 days ago [-]

Panasonic was developing a weather model incorporating data from airplanes that was supposed to be really good. But I think most models are based on physics rather than ML.

xenonite(4048) 4 days ago [-]

Wouldn't it be possible to reuse the 3G frequencies with an updated technology in order to obtain higher bandwidths?

mrweasel(4010) 4 days ago [-]

It is. Denmark is allowing the phone companies to use their existing 2G/3G and 4G frequencies for 5G. The 700MHz spectrum is also being opened up for use with 5G.

I'm not sure if it will result in dramatically more bandwidth though.

OrgNet(4010) 4 days ago [-]

If it is possible, and those frequencies are worth using for this purpose, maybe they should give free 4G/5G phones to anyone still on 3G, to be able to do it as soon as possible.

keepmesmall(10000) 4 days ago [-]

Any assurances that this won't seriously disturb the earth's ecology and human health, or do we no longer bother with that when manipulating the whole planet?

joncrane(10000) 4 days ago [-]

Given the whole thing with Global Warming, I think this question already has an answer.

autocorr(2720) 4 days ago [-]

While 5G will be a great boon, especially the beam-forming satellite version, another unintended consequence besides weather remote sensing is nuking the extremely important 24 GHz range (K band) for radio astronomy. There are a few narrow protected windows for absolutely critical spectral lines, but the truth is that nature doesn't play by the spectrum allocation rules, and there are hundreds if not thousands of lines that are observed routinely outside of the protected bands. The band is also remarkably free and clear of radio frequency interference (RFI), in part because industry has chosen other frequencies not attenuated by atmospheric water vapor. This isn't to say we should halt global human progress to save a local river bait fish, but the threat to forecasting is only one of the serious consequences major spectrum reallocation can have. This is especially true for passive use in the sciences, which has a weaker lobby than the private sector.

woah(3648) 4 days ago [-]

What does 5g have to do with satellites? It's a marketing push around some scheduled cellular equipment upgrades.

davengh(10000) 4 days ago [-]

>> save a local river bait fish

A few or more of those and we have real loss in biodiversity. Maybe your 'local river' can sustain that for a bit - but overall they are all important.

craftyguy(3116) 4 days ago [-]

As an amateur astronomer, I would greatly prefer preserving radio astronomy to allowing folks to have faster facebook crap streams on mobile devices.

N_trglctc_joe(10000) 4 days ago [-]

> While 5G Will be a great boon

This is something I've been having trouble with.

Lately I've become more aware of the secondary effects of 5G- on weather forecasting, on the radio spectrum, possibly on bees- and it's got me wondering why we need it for telecom. I just don't see the value added. I can already communicate with anyone in the world, access any information, and find my way anywhere with 4G. A significantly higher rate of data transfer just doesn't seem to add any new functionality to my phone. Can anyone give me a good rationale for 5G? Entertainment doesn't count.

I'll grant right off the bat that it'll have some fantastic industrial applications; my issue is with personal telecom. It just feels like a new planned-obsolescence vector.

dontbenebby(3995) 3 days ago [-]

Would it be possible to send up a space based telescope in those frequencies?

Maybe the economic benefit of 5G would be enough to justify a one time cost of a telescope launch, especially if US, EU, and Russia all pitch in.

watersb(10000) 3 days ago [-]

autocorr, do you know if the Jansky VLA WIDAR correlator will be able to deal with 5G?

A primary design goal was terrestrial signal rejection; we would get nuked by ABQ Traffic Control radar etc.

I haven't kept up. I suppose I should find out.

sandworm101(4017) 4 days ago [-]

If 5G is going to impact radio astronomy then the governments that license the spectrum should fund alternatives. Some simple space-based telescopes orbiting out beyond the 5G bubble would be expensive but not terribly difficult (radio telescopes, not the JWST). Put a couple out beyond the moon and the next image of a black hole won't be so blurry.

cm2187(3286) 4 days ago [-]

Stupid question. I was under the impression that one of the limits of 5g was that it was a short-distance signal, easily blocked by a wall or any obstacle. Is it really going to create interferences all the way to space? I thought satellites measured the temperature of the top of the atmosphere, not of stuff on the ground.

pas(10000) 4 days ago [-]

Typical signal transmission uses a signal-to-noise margin of something like 20-40 dB ( https://documentation.meraki.com/MR/WiFi_Basics_and_Best_Pra... ) for high speed data transmission, but if you want to just get a big fat one or zero across, then you don't need that much. And antennas are really good at picking up resonant EM radiation, even if it's not the 'full signal'.

But these are very sensitive weather sensors, and they already work by detecting a trend and then detecting a big blip over that. (So rain currently looks like some sort of interesting blip in the noise.) It might be possible to map cities with a differing trend, but it would further complicate the models. And currently over land there's not much noise, because humans don't use this part of the EM spectrum, mostly because it's not great at long range: it attenuates very fast exactly due to water vapor in the air. So it was 'easy' to exploit this for getting weather data, because it was reasonable to assume close-to-constant natural emissions. (Probably only a simple daily and seasonal trend, though it might already be necessary to handle differences between woodlands and urban areas.)

(See also, why it's hard to do the same over water: https://www.researchgate.net/publication/252663726_The_Effec... )
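To put rough numbers on "you don't need that much" for a one-bit signal, here is a Shannon-capacity sketch with made-up bandwidth and SNR figures:

    import math

    def capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
        """Shannon limit: C = B * log2(1 + SNR)."""
        return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

    # A 30 dB link supports high-rate data; even 3 dB *below* the noise
    # floor, a narrow channel still carries a slow trickle of bits.
    print(f"{capacity_bps(100e6, 30):.3g} bps at 30 dB over 100 MHz")
    print(f"{capacity_bps(1e3, -3):.3g} bps at -3 dB over 1 kHz")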

lgeorget(4080) 4 days ago [-]

They actually measure a lot of parameters, not only temperatures, and in all the layers of the atmosphere. And they're _very_ sensitive. For some forecast needs (short term forecasts, storms, etc.), the conditions below the atmospheric boundary layer (https://en.wikipedia.org/wiki/Planetary_boundary_layer) are what matter most, so microwave noise near the ground is definitely an issue.

tus87(10000) 4 days ago [-]

And nothing of great value was lost.

pas(10000) 4 days ago [-]

Hurricane path prediction seems truly a 'killer app', or do you think it's not?

cdaringe(4117) 4 days ago [-]

Rejecting on the grounds of no technical basis? I'd like to see more on that. I would hope when NASA raises a flag with the FCC it's taken with sincerity.

ghostly_s(10000) 4 days ago [-]

There are a lot of things one would hope the FCC would do that have not been happening under Ajit Pai. He's an industry shill, plain and simple.

pkghost(3677) 4 days ago [-]

Has anyone done a deep dive on 5G health concerns? E.g., 240-some scientists and 40 doctors signed a letter of discouragement (or something) claiming that research indicates 5G interacts with human biology in poorly understood ways: https://ehtrust.org/key-issues/cell-phoneswireless/5g-networ...

throwaway995669(10000) 4 days ago [-]

Unfortunately, as a society I think we're going to have to 'pee on the electric fence' for ourselves to find out.

Despite the fact that this spectrum has never been used for any widespread purpose, we're rolling it out, and the burden is not on the implementers to prove that it is safe. It's basically on researchers to prove, publicize, and convince society as a whole that 5G has health impacts.

I am not going to go all conspiracy-theory and say that the research is being suppressed but certainly funding for this research is not going to be a priority for the US government, as they've been thoroughly bought and paid for. Most research into health effects of non-ionizing radiation is not funded from the US government, so draw your own conclusions from that.

pas(10000) 4 days ago [-]

The 'good news' is that if it was serious, we would very likely already know. Therefore via some Bayesian inference we can claim that it's relatively harmless, but obviously worth keeping an eye on.

The bad news is that there are and will be entrenched interests that will likely try to 'work around' any health concern, and maybe, potentially we will hear that the existence of 5G is worth it. (For example the tech advantage helps with healthcare more than the radiation harms us.)

Dylan16807(10000) 4 days ago [-]

Some of those items on the list are sending up red flags to me. The weapon use is purely because it heats skin; if you apply a million times less energy for a million times as long, the danger level from heat is zero. When it can't penetrate skin, it does make sense to classify head skin exposure the same as foot skin exposure.

The thing about sweat glands is interesting. And sadly I have no idea how to evaluate the quality of the studies linked.

xxpor(3972) 4 days ago [-]

It's all nonsense. The higher frequencies are non-ionizing and in common use today. See all of those microwave antennas on buildings and towers? A lot of them transmit around there.

Your neighbor can blast 24 GHz right at your house with a free license:

https://en.wikipedia.org/wiki/1.2-centimeter_band

The reason microwaves (like those from a microwave oven) are dangerous is their power levels. Like, they'd cook you if they leaked out. It's not because of ionizing radiation.

CamperBob2(10000) 4 days ago [-]

Yes, a fellow named Planck did, when he described the difference between ionizing and non-ionizing radiation. Any further questions, take them up with your high school physics teacher.

black-tea(10000) 3 days ago [-]

So far the comments suggest that no, nobody has. They are all of the form 'it must be OK or they wouldn't do it', ie. the phenomenon where adults think someone else is looking after them.

roomey(4118) 4 days ago [-]

I think the main point of 5G keeps getting missed when people ask about cell phones and their broadband speed vs capacity etc. The only reason telcos are going to put in 5G is IoT coverage: low-powered trickle data from billions of devices.

Stuff for your personal cellular use would never come close to covering the costs involved. And 4G will still be used for many years to come for that.

yaantc(4014) 4 days ago [-]

The focus today for 5G is fixed wireless access and smartphones, not IoT.

5G covers 3 variants: 1) massive broadband (with mmWave in particular); 2) URLLC - Ultra Reliable and Low Latency Communications; 3) massive IoT. The current focus is on (1). Lots of talk on (2), but nothing much concrete yet. (3) is moving along, but it's actually based on... LTE.

Strictly speaking, 5G is a set of requirements defined by the ITU, not a technology. The actual technologies are developed by another organization, 3GPP. And there are 2 technologies to cover 5G: 1) NR, or 'New Radio'. This is what most people mean by 5G. It's for massive broadband and URLLC; 2) LTE (release 15 and later) for massive IoT!

Yes, the IoT variant of LTE (LTE-M and NB-IoT) will be the 5G implementations for a while. Eventually there will be new NR versions for IoT, but nobody is in a hurry there. LTE-M and NB-IoT evolutions will be just fine for a long time, as far as massive IoT is concerned.

When you hear about 5G deployment today, it really means 'NR' and not LTE-based IoT. The concern for smartphones is really capacity at peak times in the busiest cells.

I work in the field BTW, as you may have guessed ;)

uncleberg(10000) 4 days ago [-]

No, most IoT devices in most applications are dramatically underpowered for 5G.

onion2k(2257) 4 days ago [-]

5G doesn't work well inside buildings without additional hardware. I doubt people will buy that for their house just so their toaster can display adverts.

NicoJuicy(372) 4 days ago [-]

Human fallacy, we all want it faster and more.

But for 99% of the cases, why would we need it?

IoT doesn't need 5G, it needs LoRa.

Streaming applications, I can stream with 4G.

50 ms latency with 4G, so what. Except for competitive multiplayer gaming perhaps, I don't see the issue. But I think they want everything wired ;)

Industrial applications, outside of IoT? Give me a valid example that needs countrywide coverage.

I hardly notice a difference between 4G and my WiFi. Increase coverage for 4G before implementing 5G.

Fyi: 4G offers maximum real-world download speeds up to 60 Mbps. Currently, that is more than enough.

newusertoday(4087) 4 days ago [-]

You still need more capacity: as the population grows, more devices will compete for the same 4G bandwidth, and unlike optical fibre you cannot increase the bandwidth by adding new wire.

cloakandswagger(10000) 4 days ago [-]

> Industrial applications, outside of IoT? Give me a valid example that needs countrywide coverage.

Police CAD systems, streaming video from body cameras, oil well monitoring.

Just because you can't imagine an application of 5G for your consumer needs doesn't mean it isn't needed.

pas(10000) 4 days ago [-]

What is LoRa?

ryanmarsh(4031) 4 days ago [-]

Slightly OT or meta. I keep bumping up against these nutty conspiracy theories about 5G being dangerous in various forums. Has anyone done a study of the effects of certain frequencies and energy levels on the human body that I can use to refute these fools? Also, what is the canonical source on 5g spectrum and power levels?

pas(10000) 4 days ago [-]

https://arxiv.org/ftp/arxiv/papers/1503/1503.05944.pdf this basically found that 90% of the energy is absorbed on/at/around the skin.

if neurons were directly exposed they would heat up and their firing rate would be altered significantly [look at fig2 / D ] ( https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4233276/ )

this study did direct exposure testing on a human arm (and on rats and monkeys) at 94 GHz for 3 s at 1 W/cm^2: https://apps.dtic.mil/dtic/tr/fulltext/u2/a628296.pdf

this one did a 24 h low-power (1 mW/cm^2) exposure of human eye-like cells and detected no significant difference in micronucleus expression, DNA strand breaks, or heat stress protein expression: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4997488/

this one used 42 GHz on rats for 30 min/day for 3 days, looked for tumors/cancer, and found no significant difference: https://www.rrjournal.org/doi/abs/10.1667/RR3121

So it seems there's no immediate, known adverse effect (other than the heating and maybe some interference with the electro-biochemistry of cells, particularly neurons).

And luckily people usually don't use phones near their heads anymore (when doing large data transfers), so we don't really have to worry about concentrated absorption in the head from being at best a centimeter from the emitter.

> Also, what is the canonical source on 5g spectrum and power levels?

Maximum permissible exposure is defined by the FCC as a power density of 10 W/m^2 (for frequencies between 6 GHz and 100 GHz).
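For scale, a free-space point-source estimate (an idealization; real antennas, gains, and propagation differ) compares a transmitter to that 10 W/m^2 limit:

    import math

    def power_density_w_per_m2(eirp_w: float, distance_m: float) -> float:
        """Free-space power density from a point source: S = EIRP / (4*pi*r^2)."""
        return eirp_w / (4 * math.pi * distance_m**2)

    # Hypothetical small cell radiating 10 W EIRP:
    for r in (1, 10, 100):
        print(f"{r:>4} m: {power_density_w_per_m2(10, r):.4f} W/m^2 (limit: 10)")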





Historical Discussions: "I worked at Boeing for about 1.5 years in the 2008-9 time period" (April 16, 2019: 630 points)

(631) "I worked at Boeing for about 1.5 years in the 2008-9 time period"

631 points 4 days ago by thereare5lights in 3995th position

www.reddit.com | Estimated reading time – 5 minutes | comments | anchor

I worked at Boeing for about 1.5 years in the 2008-9 time period and I can absolutely guarantee this happened.

First, Boeing's corporate culture is the worst shitshow I have ever experienced. All large corporations have a lot of internal issues and problems, but nothing like the Lazy B. It was like working in a company designed by Kafka. I signed up at Boeing as a programmer. When I showed up on my first day of work, the first words out of my supervisor's mouth were, 'I don't know why you are here, we have no need for programmers.' (The Boeing interview process is done so that at no point do you ever have contact or communication with the team you will be working with.)

So, basically, I was cutting and pasting cells in Excel spreadsheets and doing ad hoc project management during my time there. They did have need for a programmer, but I didn't have access to install any programming software on my machine because no one knew who the local IT person was. No one. It was a year before I was able to figure that out and only because I was bored one day and was walking around the building and found the guy's cubicle by accident.

To be fair, the aging aircraft division that I was in was notoriously bad, even for Boeing. It was where they put people that the union wouldn't let Boeing fire. I would conservatively estimate 30% of my co-workers were full-blown sociopaths who would actively work to sabotage and ruin other people's work. Another 50% of the people there blatantly goofed off all day, reading the newspaper or books with their feet up on their desks (literally). The remaining 20% were people who actually cared about airplane passengers not dying and worked themselves half to death to keep things afloat. I'll give a quick shout out to Anastasia, James and all the contract workers who actually did their jobs. There are probably a few thousand people around the world who aren't dead because of you.

Anyhow, James (or was it Jim? It's been a while.) was a grouchy old engineer they stuck me next to. He was close to retirement and clearly wasn't too stoked about losing half his cubicle to an unwanted programmer that showed up one day. James had a bunch of photos of an old 747 and structural diagrams pinned to his cubicle wall. One day, I asked what those were.

They were pictures and failure analysis diagrams of JAL 123, the worst single-airplane disaster in history. 520 people died. It was because a couple of Boeing engineers fucked up. That 747SR had had a tailstrike incident on takeoff that damaged the rear pressure dome. A team of Boeing AOG (Airplane On the Ground) mechanics were flown out there to fix it. To oversimplify, they rushed and accidentally did the equivalent of 1+1=1 on one of their stress calculations. It was an error very similar to the infamous Hyatt Regency walkway collapse. 12,318 flights later (well before the at least 25-30,000 flight cycles the crack inspection schedule would have assumed), the rear bulkhead ripped out mid-flight and severed all hydraulic control lines. The plane lost all control and flew in a rollercoaster trajectory for 32 minutes before running into the side of a mountain. Many of the passengers had time to write goodbye letters to their loved ones. James had those photos and diagrams on his cubicle wall so that every day, he could look at them and remind himself of why his job was important and why he couldn't cut corners.

James was clearly an incredibly knowledgeable and talented engineer. He was the widely acknowledged expert in the entire department. If any other engineer had a question, they would always come to him for advice. So why was such a good engineer relegated to a department full of fuckups and malcontents? Because he wouldn't cut corners on safety.

This was the final stages of the 787 rollout, which was behind schedule and full of issues. James had constantly raised red flags about safety corners Boeing was cutting on the 787 rollout. Things like putting the plane out before there was a good understanding of crack propagation speed, nondestructive testing protocols and repair protocols for all the carbon fiber on the plane. These were extremely serious issues that Boeing swept under the rug to get the 787 out faster. Because he wouldn't toe the line on this, James got exiled to the shitty little backwater I ran into him at where he was counting the days until he could retire and spend his time SCUBA diving out at Edmonds.

To this day, I refuse to fly on a 787. I'm sure that the Dreamliners that came off the assembly line after about a year or so are fine, but that first year of production, as far as I'm concerned, produced ticking time bombs. I talked to enough engineers who had worked on that program to know just how badly they rushed that initial production.

So, as far as I'm concerned, fuck Boeing. This was inevitable. I'm honestly shocked it took this long for something like this to happen.




All Comments: [-] | anchor

Pigo(10000) 4 days ago [-]

I contracted with a branch of Lockheed. While I couldn't/shouldn't talk about any of the technology involved, I can attest to the culture I witnessed. I always assumed it was horrible because of the government affiliation. But I think that mostly affected their focus, in that most of the branch was dedicated to wasting money so they could ask for more money. The culture itself, however, was stagnant by design. Most of the employees actively fought anything that would require them to learn or change. EVERYONE was counting the clock to retirement. Anyone who wasn't in the same boat left as soon as they got the big picture. It made me extremely pessimistic about taxes.

itronitron(4113) 4 days ago [-]

I've worked on both excellent teams and dysfunctional teams doing contract work for the government. The distinction to be made is size and longevity of the programs involved. For various reasons, large long lasting programs have accumulated rules that prevent anyone from doing anything productive.

decoyworker(10000) 4 days ago [-]

You hit the nail on the head with regards to the culture in this industry. This is my experience as well.

For an engineering oriented industry you have an alarming number of engineers with an aversion to learning anything new. As much as incoming / lower level software engineers complain about tools, management / senior engineers just go ahead and choose the shittiest IBM or Oracle product for the job because some vocal graybeard minority in a meeting will complain it's not Clearcase.

pornel(3305) 4 days ago [-]

How come we end up with ineffective duopolies? Why isn't market competition finding a way around these failing companies?

tarabanga(4119) 4 days ago [-]

> wasting money so they could ask for more money.

When one strongly encourages government doing business with contractors over hiring govt FTEs, it should lead to (1) all technical duties being offloaded to contractors, (2) loss of govt FTEs with technical expertise, (3) inability to hire replacement govt FTEs because of stigma and budget priorities, (4) govt FTEs taking on contract administration duties, effectively administering the production and maintenance of things they know nothing of and have no interest/investment in, who are working with (5) businesses (private corps) whose legal duty is to make profit, employing management, sales, and technical people (contractors' employees) whose primary priorities are their own careers.

(6) is a bonus. If you get to hire a technical govt FTE somehow, s/he will start identifying the inefficiencies and irregularities of the contractors' practices. That means his/her manager and colleagues will feel threatened or fear retribution (for contract mismanagement), pushing the new FTE out in one way (demotivation) or another (bureaucratic and/or political games).

No one is evil or lazy in this dynamic. It's more simply the unintended consequences of public policy choices, which have slowly become the basic tenets of the culture of societies implementing those choices, quickly creating very sturdy social structures and roles.

dmonagha(10000) 4 days ago [-]

I worked at a similar large aerospace contractor just in the past couple years. I can attest 100% to what you've said.

They estimated they would need to replace 40% of their workforce within the next 10 years just to stay afloat. Recently at my old site they hired 1000 people. Within a year, 800 had quit. These are software engineers, mechanical engineers, etc. I watched people just play on their phones, keep their feet up, basically do nothing at all.

One guy showed up to work on the first day and was told by his manager : 'I don't have an office, so you'll have to sit here for now'. The manager then flew back to another state, and the guy did not see him ever again. For a year this guy played Candy Crush on an iPad and did nothing because no one else knew who he was or who he worked for. Eventually he got a new manager and his job was then unboxing computers...for a year...When I started working with him, he would go into a large empty lab and just lay down behind some boxes and nap for 2hrs a day.

He was hired to be some sort of cost accountant.

This was not uncommon, it was rampant. I still cannot believe workplaces like that exist.

jet_32951(10000) 4 days ago [-]

At the last place I worked, Fortune 50 systems integrator, you can find yourself in a program where risk is increasing and schedule is galloping to the right, yet people are getting shuffled to other projects in the classic matrix organization move to 'save the budget'. Maybe this is because higher management is gambling that another component of the program is going to be late and therefore there's no pressure anyhow, but this doesn't get communicated downward and there's jerky and inconsistent progress for reasons that don't otherwise make sense. After a time, people get tired of the idea that they are pawns in a financial game and whether they do good, mediocre or bad work isn't important or perhaps even relevant to company goals.

jcadam(4102) 4 days ago [-]

I worked at Boeing on a large satellite program back in this same time frame (2008-2011). I worked on the software simulator for the satellite that was used to test flight code and the ground system (i.e., give the ground control software simulated satellites to talk to, and to wring out procedures).

Anyway, as the final configuration for this satellite was still being tweaked, I needed to get updated mass properties (in order to simulate the physics properly) from the team working on the real satellite as the configuration changed (e.g., they added more batteries, solar panels, decreased RCS tank size, etc.). Ordinarily, these would be emailed to me in an excel spreadsheet every so often. I would make the updates, life would go on.

Now, internally, the simulator software worked with metric units, and the spreadsheet I received would also use metric units. Apparently, one day an engineering manager on the vehicle team found out that one of their engineers had been 'helping' the simulator team by plugging the mass properties into an excel spreadsheet, translating the units from imperial to metric, and sending them to me (I did not know any of this, of course... I just knew the vehicle team would send me updated mass properties from time to time).

This was an outrageous affront to said manager, who ordered his folks to not expend any time helping 'some other team.' So, the next time I needed updated mass properties, what did I get? A faxed copy of something that looked like it had been generated on an old line printer. I called and asked 'Where is the spreadsheet?' and got 'Sorry, that's all I can do anymore.'

Some of the numbers were questionably legible, but I tried to use it anyway. As I was making my updates, I noticed the numbers were way off. Units weren't labeled on the fuzzy faxed copy, so it took me a few minutes to realize that the vehicle engineering team apparently worked with imperial units internally.

Angry phone calls back and forth ensued, but I don't recall the (political) issue ever being fixed. I didn't stay much longer, so I don't know if it was ever resolved.
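A units mixup like that is exactly what unit-aware types are meant to catch; here is a minimal sketch using the third-party pint library (the masses are invented):

    import pint  # pip install pint

    ureg = pint.UnitRegistry()

    # Tag numbers with units at the boundary; conversions are explicit, and
    # adding incompatible quantities raises a DimensionalityError instead of
    # silently producing garbage.
    dry_mass  = 3000 * ureg.pound        # hypothetical figure off the fax
    fuel_mass = 450 * ureg.kilogram

    total = dry_mass.to(ureg.kilogram) + fuel_mass
    print(f"{total:.1f}")                # always kilograms internally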

rajbot(10000) 4 days ago [-]

I'm surprised there wasn't a push to standardize units after this incident: https://en.wikipedia.org/wiki/Mars_Climate_Orbiter

okmokmz(10000) 4 days ago [-]

I'm currently working at Boeing, and had a very similar experience to him getting onboarded as a contractor on a major project. Just as an example, I'm going to outline the steps it took to get me a proper badge

1. Filled out badge request form and gave it to my manager who gave it to the person who would be my manager at boeing

2. 4 months later have to refill out form because they lost it

3. 1.5 months later I receive badge, but it doesn't have the chip I require

4. Get told by manager to go back to badge office and get chip, but apparently the form was submitted by someone who actually isn't my manager but is listed as my manager and they selected 'no chip' so they can't reissue until they get a new form

5. My manager submits new form requesting chip, go back to badge office and get told I can't get a chipped card because he isn't technically my manager so they need to change my assigned manager first

6. Actual manager attempts to contact the person boeing says is my manager multiple times, after about 3 weeks they hear back from them

7. Manager finally changed so he sends email to badge office saying that he is now my manager and I need the badge

8. Go back to badge office and they can't give me chipped badge because he submitted the form before he was my manager and the email isn't sufficient.

9. He submits new form, go to badge office and they finally give me chipped badge

And mind you, everyone is so lazy that it takes multiple days minimum for every communication with anyone. So essentially Boeing paid me for an extended amount of time where I couldn't really do anything, because no one cared to get me what I needed to do my job. Also, that was only the first piece. I am still working on getting everything else I need, and it doesn't look like it's going to happen anytime soon.

Another thing I hate is that everything costs a ton of money and is garbage. No free coffee, no free tea, no free snacks, no free cups, no free stirring sticks, and the cafeteria food and options they do have are way overpriced and worse than dog food. The majority of the staff are old, sad white men that don't seem to want to do anything.

Oh, and on top of everything, the project is 100% in the cloud but the out-of-touch manager requires everyone to be onsite.

By far the worst company I have ever had the displeasure of being contracted at. Can't wait to be done

edit: thought of another wonderful example: I once heard a member of the 'security' team say that their job is to just say no to everything that comes their way

rargramble(10000) 4 days ago [-]

> Old, sad white men

Would it be better if they were old, sad black men?

silvermast(10000) 4 days ago [-]

Dude, we had totally different experiences. When I got my grey badge, I walked in, gave them the forms I was told via email they would need, was then told to wait 3 hours, and left to go hang out at the lake. Sunbathed for 2 hours. Came back. Got my badge. No issues.

My team was very underutilized... We got legitimately excited when we came in and there was real work for us to do instead of sitting around and being 'available'.

lordnacho(4103) 4 days ago [-]

My view of aircraft engineering has certainly changed since the 737 stuff came out. You always get told software engineers are a bunch of cowboys compared to aircraft people, because aircraft people have to put everything through a bunch of rigorous tests.

I asked a friend who trains commercial pilots about it, and to my surprise he told me the processes to get things certified are political and hand-wavy.

carlmr(10000) 4 days ago [-]

>the processes to get things certified are political and hand-wavy

The big processes that make everything less cowboy inevitably lead to the build-up of people who only do politics and processes and are far removed from actually designing code.

gmueckl(3743) 4 days ago [-]

I haven't worked in aviation yet, but from what I have seen in other areas that need certification processes, the system is set up in a way that invites gaming it. In Europe, most certifications are performed by competing private companies. That cannot possibly go wrong and erode standards, right?

atemerev(3234) 4 days ago [-]

The problem is not "cowboy attitude" or "bureaucratic swamp"; these are two sides of the same coin: process inefficiency promotes corner-cutting.

The problem is lack of care. They don't even seem to care about the money, judging from their current stock prices.

lawlessone(10000) 4 days ago [-]

I know software is bad.

But using the same airframe from the '60s and just strapping bigger engines to it until you have to move the nacelles and alter the flight characteristics seems like a cowboy move also.

TheCondor(10000) 4 days ago [-]

I think there were some stories about this from Challenger. The cost, be it financial, political, or other, was so great to get a part "space rated" that in some cases they'd use redundant bad parts to cope with errors. I vaguely remember this being used to describe a temperature sensor that was wrong like 1% of the time; they used a group of them and some algorithm to filter out the errors.

Can you think of a more geek-sexy project than working on a rocket, spacecraft or jet? Then the reality is that it has to be about the least satisfying engineering job around; it looks like you have to make a titanic effort just to get anything done.
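
The simplest version of that kind of filter is median voting across the redundant sensors; a minimal Python sketch (the actual algorithm isn't specified above, so this is only illustrative):

    def filtered_reading(sensor_values):
        # Median voting: a single wild reading among redundant sensors is outvoted.
        ordered = sorted(sensor_values)
        return ordered[len(ordered) // 2]

    print(filtered_reading([20.1, 19.8, 87.3]))  # 20.1; the faulty 87.3 is ignored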

mgoia(10000) 4 days ago [-]

>This was inevitable.

Inflammable means flammable? What a country!

chrisseaton(2974) 4 days ago [-]

'Evitable' means 'avoidable'. 'Inevitable' means 'not avoidable'. They mean the opposite, which is what you'd expect.

It's nothing like 'inflammable' and 'flammable', which mean the same thing. Those two have different roots, which is why there is confusion.

JohnFen(10000) 4 days ago [-]

'When I showed up at my first day of work, the first words out of my supervisor's mouth were, 'I don't know why you are here, we have no need for programmers.''

Wow. If that's not a huge red flag that you shouldn't work there, I don't know what one is. The only appropriate response to that, I think, is to say 'OK, then, it was nice to meet you. I quit.'

SkyMarshal(2327) 4 days ago [-]

Another appropriate response:

"That's odd, you build major products that sell for hundreds of millions of dollars, that tens of thousands of peoples' lives literally depend on, and that are completely controlled by software, fly-by-wire, etc. And yet you have no need for programmers?"

curiousDog(4106) 4 days ago [-]

Unpopular opinion, but had this happened at a company that hired H-1Bs, people would've jumped on the 'incompetent immigrants' bandwagon immediately.

This here is a fine example of the classic American work ethic devolving into sheer laziness. FWIW, I saw this happen at another military contractor (Northrop Grumman) too!

codingslave(10000) 4 days ago [-]

Boeing is full of H1Bs...

atemerev(3234) 4 days ago [-]

There definitely are, or at least were, H-1Bs at Boeing. In the beginning of my career, I nearly became one myself (this was at the time when Russia and the US were on much friendlier terms), but I decided to steer into computational finance instead.

Not all H-1Bs are incompetent or lazy. I still have good memories of my team back in Russia — some people there did contractor work for Boeing (documentation management software), and a few moved to the US via H-1Bs. They were competent professionals, I'd vouch for them anytime.

There are no guarantees with every contractor, of course.

masonic(2371) 4 days ago [-]

  had this happened at a company that hired H1-Bs
But they do, so I don't understand what point you seek to make.

https://h1bdata.info/index.php?em=THE+BOEING+COMPANY

nostrademons(1625) 4 days ago [-]

Sometimes I wonder what'll happen if the United States ever has to fight a war against an opponent whose industrial base is not 150 years behind it.

Then again, I've heard Russian military equipment is in just as sorry a state. No idea what the Chinese military is like, but I'd imagine the real money is in selling to Americans rather than killing them.

We're in this interesting place where it's entirely possible that the best military equipment is actually in the hands of private, ostensibly civilian companies. SpaceX has ballistic missiles with a CEP significantly smaller than a drone ship, for example, a feat that was unheard of during the Cold War. For that matter, the very concept of a drone ship would've been awfully helpful during the Battle of the Atlantic.

anonfounder747(4108) 4 days ago [-]

It would be presumptuous to think 'this is the real Boeing' based on anecdotes from people who worked there. Even in big companies, where a lot of folks are twiddling thumbs, there are undoubtedly teams where real quality work gets done. These teams are somewhat insulated from the bureaucratic environment around them and attract true performers.

RealityVoid(4116) 4 days ago [-]

Do you think it's the teams that are insulated, or the people who know what they are doing?

Because in my experience the people that do the heavy lifting are just randomly distributed and are bright spots in an otherwise gray landscape.

koolba(675) 4 days ago [-]

> To this day, I refuse to fly on a 787. I'm sure that the Dreamliners that came off the assembly line after about a year or so were fine but there's that first year of production that, as far as I'm concerned, are ticking time bombs. I talked to many engineers who had worked on that program to know just how badly they rushed that initial production.

Ouch. Anybody know where to get a list of who's operating the planes from that first year of deliveries?

Do airlines publish the serial numbers for upcoming flights? I.e., is it possible to check the delivery order of the plane you're potentially flying on?

jnsaff2(4024) 4 days ago [-]

https://www.planespotters.net/production-list/Boeing/787 should make the rest easy.

https://www.reddit.com/r/videos/comments/bdfqm4/the_real_rea... gives some background and estimates the first 'safe' plane to be SN 27.

anotheryou(4060) 3 days ago [-]

Yes, it's possible, but only on very short notice, e.g. via https://www.flightradar24.com/airport/phl/departures

I think there was a way to find the info without paying for premium, but I forget which of these names to put into Google to get more info on a specific plane.

ndmrcf(4116) 4 days ago [-]

Slightly unrelated, but I've built https://myplane.info/ to quickly check if my plane is still the same before the flight. AFAIK, if the plane is changed from the scheduled one, we have the right to get a full refund* with most of the airlines. I'm thinking about adding a notification feature for that.

*Edit: From what I've found, the full refund above is not available with all airlines; a rebooking option is offered instead.

dx034(3554) 4 days ago [-]

Sometimes, online flight trackers estimate the registration prior to the flight and they usually have the serial number as well (otherwise you can look it up on airfleets, plane spotters etc). For some flights it's easy to estimate the registration yourself (if the only plane available is a current inbound flight).

But generally, you won't get that info more than a day in advance. Airlines won't even know it more than a week in advance, they only know that it'll be one of a pool of planes of the same type.

theNJR(3971) 4 days ago [-]

Yah that gave me shivers. I'm flying a 787 in June.

llcoolv(10000) 4 days ago [-]

Well, a decades-long oligopoly of two companies is similar to central planning in many aspects, and his story indeed rings a lot of bells.

dosy(2893) 4 days ago [-]

Poor Boeing, I guess they're putting all their best people and resources into off-world spacecraft.

matt4077(1176) 4 days ago [-]

In the last 50 years, fatalities per miles travelled have been reduced by a factor of 100.

If that's an oligopoly at work, I want to try a monopoly and see if it's even better.

lazyjones(4036) 4 days ago [-]

It's good that the C919 and MC-21 have arrived to compete then. I just don't see how the catastrophic MCAS design could have been avoided with more competition...

tomohawk(1896) 4 days ago [-]

It's interesting how an unknown source making unverifiable claims on a site which is basically a rumor mill gets so much attention.

spookthesunset(10000) 4 days ago [-]

This happens all the time on reddit and it definitely worries me a bit. People post what is, for all I know, complete fiction, and people seemingly take it as fact. Point out that it could be made up and watch those downvotes fly...

gruez(3620) 4 days ago [-]

It's not, if you think about it. People believe what they want to hear. Unverified posts agreeing with the popular narrative get immediately accepted as fact, and posts disagreeing with the narrative get accused of being shills or damage control.

I'm not saying this guy is a fraud, but without supporting evidence, there's no reason to believe him.

okmokmz(10000) 4 days ago [-]

Have you read through the other comments here and on reddit? Many people have had similarly bad experiences

hacknat(3957) 4 days ago [-]

I lived in Seattle for 7 years and got to know some Boeing project managers. The stories they told are very similar to this one. Apparently the life of a Boeing PM or engineer is endless meetings; I asked one of my friends why this was the case. His response? A good chunk of the managers at Boeing won't use email.

pier25(3403) 4 days ago [-]

Not being able to use modern forms of communication is one of the biggest problems where I work now. I'm on a remote team and we are completely isolated from whatever happens in the company.

Most departments only get reached by management when there is a problem.

It's a relatively small company of about 50 employees and there is zero sense of team work.

philpem(4114) 4 days ago [-]

I wouldn't be surprised if this was a liability thing.

You can't subpoena a conversation unless it was recorded...

jedberg(2209) 4 days ago [-]

Interesting. I was hanging out at a wedding, right around that time frame, with someone who was one of the lead engineers on the 787 rollout. He too had similar comments about safety. He told us that the 787 was failing its FAA tests, but since the head of Boeing's FAA relationship was a former FAA tester, he 'helped' the FAA rewrite the tests so that the 787 would pass.

I told him he may want to consider being a whistleblower, but his response was 'well, Airbus didn't pass their FAA test either until they rewrote the tests'.

I still look at the overall safety numbers and realize that flying is still safer than driving, but it sounds like it could be safer still.

ActorNightly(10000) 4 days ago [-]

The difference is that driving is mostly under your control, while flying is not.

socrates1998(4093) 4 days ago [-]

Wow, good to know. I won't be going anywhere near a 787.

spookthesunset(10000) 4 days ago [-]

Good thing the poster of the story is totally verified legit and not at all making things up for fun or profit.

chippy(631) 4 days ago [-]

'I worked at Boeing for about 1.5 years in the 2008-9 time period and I can absolutely guarantee this happened.'

was in response to this comment:

'I would be very surprised if in a few years from today a bunch of engineers don't testify that ample of warning was given to management about this. '

Reddit doesn't make it easy to find what they replied to.

executesorder66(3521) 4 days ago [-]

Yes it does. Just click on the link labeled 'parent' at the bottom of any comment.

cesarb(3307) 4 days ago [-]

> Reddit doesn't make it easy to find what they replied to.

The trick is to add ?context=3 to the end of the URL: https://www.reddit.com/r/videos/comments/bdfqm4/the_real_rea... (or for the better 'old' interface: https://old.reddit.com/r/videos/comments/bdfqm4/the_real_rea...)

okmokmz(10000) 4 days ago [-]

>Reddit doesn't make it easy to find what they replied to.

Just click parent

j_4(4102) 4 days ago [-]

On the old layout there's a 'parent' link under every comment.

If you're not logged in, you can just add 'old.' at the start of the url.

navigatesol(10000) 4 days ago [-]

>'I would be very surprised if in a few years from today a bunch of engineers don't testify that ample of warning was given to management about this. '

I love this mentality that 'managers' are the only people responsible, and that the people actually putting the thing together are not. It's the same with the bank scandals, or the VW scandal, or the Facebook scams; it's all the executive team! We were just peons following orders! We aren't responsible!

If you see a major safety issue or other concern, and report it to your superior, who does nothing, you're pretty weak if you shrug your shoulders and say, 'I did my job!'

basetop(10000) 4 days ago [-]

You can thank their redesign.

Try using 'old.reddit.com' while it lasts. And stop using reddit once that gets taken down.

Reddit was once a great site. Now it's a corporatized, censored mess that sold its soul for a short-term cashout in its 2020 IPO.

spectramax(10000) 4 days ago [-]

Btw, the new 'fix' for MCAS has just been announced and is discussed by a pilot with 40 years of experience [1]. It turns out that while we shouldn't be doing armchair analysis and assuming the worst of the engineering, management and executive teams, the software fix released appears to be rather elementary, and MCAS should have been designed with these safety checks in the first place. Why it wasn't is a huge concern, and we wonder what other systems are at risk.

In summary, the software fix does the following:

1. Use inputs from both AOA sensors; if they disagree by more than 5.5 degrees, disable MCAS. The original MCAS used only one AOA sensor and switched back and forth between the two after every flight.

2. Triple-redundant filters: A) an average-value reasonability filter, B) a catastrophic-failure low-to-high transition filter, C) a left vs. right AOA deviation filter.

3. Limiting MCAS stab trim so that the elevator can always provide 1.2g of nose-up pitch authority for recovery. Furthermore, electric trim with the yoke switch will override MCAS.

It turns out that the armchair analysis was sort of on point and goes along with the incompetence at Boeing affirmed in this reddit post. Sometimes I wonder how we humans collectively build extraordinary monuments while we individually rest on stilts.

[1] https://youtu.be/zGM0V7zEKEQ?t=370
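
A minimal Python sketch of the cross-check in point 1 (hypothetical names and logic; not the actual MCAS implementation, and the 5.5-degree figure is simply the threshold quoted above):

    DISAGREE_LIMIT_DEG = 5.5  # the threshold quoted in the video summary above

    def mcas_may_act(aoa_left_deg, aoa_right_deg):
        # Point 1: inhibit MCAS whenever the two AOA sensors disagree too much.
        return abs(aoa_left_deg - aoa_right_deg) <= DISAGREE_LIMIT_DEG

    assert mcas_may_act(4.0, 5.0)        # sensors agree: MCAS may command trim
    assert not mcas_may_act(4.0, 12.0)   # an 8-degree split: MCAS is inhibited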

morpheuskafka(10000) 4 days ago [-]

I know very little about aircraft, but wouldn't making the MCAS less powerful increase the risk of a stall? How do we know that there would be a net decrease in crashes?

linuxftw(10000) 4 days ago [-]

> 1. Use inputs from both AOA sensors, if they disagree by 5.5 degrees, disable MCAS

This can only mean that MCAS is disabled even more frequently. This is a terrible idea. This is like RAID0.

The failure mode will increase greatly in complexity. MCAS might cut in and out, and the plane might behave erratically.

What they need to do is add a damn switch for 'disable MCAS' and tell people how and when to use the damn thing.

thatoneuser(10000) 4 days ago [-]

Why do you say we shouldn't do armchair analysis right before you acknowledge that said analysis was on point? I think it's good our populace is eager to analyze. Ultimately we aren't making the decision, just forming a branch of decentralized checks. As long as witch hunts like the Boston bomber one don't happen, I say people should analyze to their hearts' content.

koenigdavidmj(3246) 4 days ago [-]

I thought that the flight immediately before this one had MCAS trouble too. Were both sensors busted, or was that actually an even number of cycles previous to the crash, and only the previous day?

ptidhomme(10000) 4 days ago [-]

> I wonder how we, humans, collectively build extraordinary monuments while we individually rest on stilts

My take is that we used to have brilliant engineers in the times when computational resources were scarce (the '60s, '70s, '80s...), and they thought out their designs thoroughly, starting with a fucking paper and pencil. Aerospace industries still rely on the designs from those times.

But now engineering seems to be: let's ask my computing power what the good design is, with little further questioning.

agumonkey(929) 4 days ago [-]

I don't get it; Boeing seemed to have a positive slope, with their plane outselling the competition. Fragile system design is hard to understand in this context. But maybe MCAS was conceived at a time when Boeing was more willing to cut corners.

usrusr(10000) 4 days ago [-]

I wonder if the new iteration will also get an intermediate disable state, where you keep powered trim but shut out all automatic inputs. Defense in depth.

garaetjjte(3720) 4 days ago [-]

>Furthermore, electric trim with the yoke switch will override MCAS.

Does that mean it didn't override MCAS before? Really?

ozmaverick72(10000) 4 days ago [-]

The software fixes as described sound sensible, and it's hard to believe they were not in the original design. Unfortunately, I don't see how these changes will allow them to keep a common type rating with the 737 NG. After the updated MCAS has made its one-shot trim adjustment, you will now be flying an aircraft with different handling characteristics if you keep increasing the angle of attack into the stall. Surely the FAA can't sign off on the common type rating and half an hour of video training?

ilogik(10000) 4 days ago [-]

> Original MCAS system used only 1 AOA sensor and switched back and forth between the two after every flight.

Who the fuck thought that was a good idea???

Let's make it 10x more difficult to detect that there is a problem.

tus87(10000) 4 days ago [-]

> 1. Use inputs from both AOA sensors, if they disagree by 5.5 degrees, disable MCAS.

Great, and now you have the other problem the MCAS was supposed to mitigate (the tendency of the MAX to do backflips).

starpilot(2906) 4 days ago [-]

In his other comments, he walks back his remarks:

> Also, in all fairness to my hated former employer, air travel is still by far the safest form of travel. Even with the shitshow at Boeing, Boeing planes manage to be incredibly safe. I'm really not sure how, but they are.

> And it might be because I grew up around Boeing, but I'd still fly Boeing over Airbus, to be perfectly honest. Airbus makes good planes, but that reliance on computers over pilots just makes me nervous.

> Even the 787, despite all the horrible issues with the initial run, is probably going to be an exceptionally safe plane due to the carbon fiber construction.

So it sounds like much of his commentary is of the 'how the sausage is made' nature that is common to the gestation of many (all?) complex products. You probably have nothing to worry about with air travel.

AWildC182(10000) 4 days ago [-]

I have some experience with carbon composite construction. His statement about the 787's initial production is eerily similar to other stuff I've seen. I'm not saying it will end poorly, but let's just say we don't know nearly as much about how carbon fails and ages as we do about aluminum. Carbon is an amazing material, but I've seen a lot of cases where, during initial production, people realized they needed extra material here and there, and various process improvements, to prevent delamination and eventual catastrophic failure. It may take a few more years, but I wouldn't be surprised if those initial aircraft find their service lives to be much shorter than promised. Unfortunately, we may never be made aware.

pas(10000) 1 day ago [-]

> You probably have nothing to worry about with air travel.

Well, the conclusion is already made in the statistics, and air travel is the safest, yes, but it's worrying that it's safe despite the enormous failings, despite the insane pressure of market forces.

A very eye-opening thing is to watch what the pilots should have done: https://www.youtube.com/watch?v=xixM_cwSLcQ&t=16m55s

A runaway stabilizer scenario has associated memory items (meaning the pilot must have the checklist committed to memory). There are big wheels that turn, with noise and high-visibility markings.

This was Boeing's initial defense too, that the pilots fucked up, because disabling any electronics works the same way (there's a cutoff switch): if something is misbehaving, shut off the power and rotate the wheels manually.

And in this regard Boeing is right. The pilots' decision-making was suboptimal. However, it is Boeing's fault that they opted for a no-training-required strategy.

A similar WTF is that pilots don't keep up with the aviation industry news. How - the fuck - can you pilot a plane that you know nothing about (except that your license is valid for it)!?





Historical Discussions: Want to learn a new skill? Take some short breaks (April 14, 2019: 591 points)

(591) Want to learn a new skill? Take some short breaks

591 points 6 days ago by occamschainsaw in 2191st position

www.ninds.nih.gov | Estimated reading time – 5 minutes | comments | anchor

NIH study suggests our brains may use short rest periods to strengthen memories

In a study of healthy volunteers, National Institutes of Health researchers found that our brains may solidify the memories of new skills we just practiced a few seconds earlier by taking a short rest. The results highlight the critically important role rest may play in learning.

"Everyone thinks you need to 'practice, practice, practice' when learning something new. Instead, we found that resting, early and often, may be just as critical to learning as practice," said Leonardo G. Cohen, M.D., Ph.D., senior investigator at NIH's National Institute of Neurological Disorders and Stroke and a senior author of the paper published in the journal Current Biology. "Our ultimate hope is that the results of our experiments will help patients recover from the paralyzing effects caused by strokes and other neurological injuries by informing the strategies they use to 'relearn' lost skills."

The study was led by Marlene Bönstrup, M.D., a postdoctoral fellow in Dr. Cohen's lab. Like many scientists, she held the general belief that our brains needed long periods of rest, such as a good night's sleep, to strengthen the memories formed while practicing a newly learned skill. But after looking at brain waves recorded from healthy volunteers in learning and memory experiments at the NIH Clinical Center, she started to question the idea.

The waves were recorded from right-handed volunteers with a highly sensitive scanning technique called magnetoencephalography. The subjects sat in a chair facing a computer screen and under a long cone-shaped brain scanning cap. The experiment began when they were shown a series of numbers on a screen and asked to type the numbers as many times as possible with their left hands for 10 seconds; take a 10 second break; and then repeat this trial cycle of alternating practice and rest 35 more times. This strategy is typically used to reduce any complications that could arise from fatigue or other factors.
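
The trial structure is simple enough to sketch in code (the 10-second timings and 36 cycles are from the description above; everything else, including the function name, is illustrative):

    import time

    def run_session(cycles=36, practice_s=10, rest_s=10):
        # Alternate 10 s of typing practice with 10 s of rest, 36 cycles total.
        for trial in range(1, cycles + 1):
            print(f'Trial {trial}: type the sequence now')
            time.sleep(practice_s)
            print('Rest')
            time.sleep(rest_s)

    # run_session()  # a full session takes about 12 minutes (36 x 20 s)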

As expected, the speed at which the volunteers correctly typed the numbers improved dramatically during the first few trials and then leveled off around the 11th cycle. When Dr. Bönstrup looked at the volunteers' brain waves, she observed something interesting.

"I noticed that participants' brain waves seemed to change much more during the rest periods than during the typing sessions," said Dr. Bönstrup. "This gave me the idea to look much more closely for when learning was actually happening. Was it during practice or rest?"

By reanalyzing the data, she and her colleagues made two key findings. First, they found that the volunteers' performance improved primarily during the short rests, and not during typing. The improvements made during the rest periods added up to the overall gains the volunteers made that day. Moreover, these gains were much greater than the ones seen after the volunteers returned the next day to try again, suggesting that the early breaks played as critical a role in learning as the practicing itself.

Second, by looking at the brain waves, Dr. Bönstrup found activity patterns that suggested the volunteers' brains were consolidating, or solidifying, memories during the rest periods. Specifically, they found that the changes in the size of brain waves, called beta rhythms, correlated with the improvements the volunteers made during the rests.

Further analysis suggested that the changes in beta oscillations primarily happened in the right hemispheres of the volunteers' brains and along neural networks connecting the frontal and parietal lobes that are known to help control the planning of movements. These changes only happened during the breaks and were the only brain wave patterns that correlated with performance.

"Our results suggest that it may be important to optimize the timing and configuration of rest intervals when implementing rehabilitative treatments in stroke patients or when learning to play the piano in normal volunteers," said Dr. Cohen. "Whether these results apply to other forms of learning and memory formation remains an open question."

Dr. Cohen's team plans to explore, in greater detail, the role of these early resting periods in learning and memory.

Article:

Bönstrup et al., A Rapid Form of Offline Consolidation in Skill Learning. Current Biology, March 28, 2019 DOI: 10.1016/j.cub.2019.02.049

This study was supported by NINDS' Intramural Research Program and the German National Academy of Sciences Leopoldina (LPDS 2016-01).

For more information:

National Institute of Neurological Disorders and Stroke

Division of Intramural Research

Stroke Information Page

Stroke Hope Through Research

NIH Clinical Center

###

NINDS is the nation's leading funder of research on the brain and nervous system. The mission of NINDS is to seek fundamental knowledge about the brain and nervous system and to use that knowledge to reduce the burden of neurological disease.

About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit https://www.nih.gov.




All Comments: [-] | anchor

dorkwood(4059) 5 days ago [-]

In my experience, this effect is most noticeable while learning an instrument.

Practice until you start to see diminishing returns, take a short break, and then return to practicing. You'll notice you're slightly better than you were before taking a break.

PinkMilkshake(10000) 5 days ago [-]

I really notice this during video games. As soon as I feel stuck/frustrated I try stop, take a 15 minute break, then quite often nail it on the first retry.

trillic(10000) 5 days ago [-]

I drink tons of water; I try to drink my 32 oz bottle once an hour when I'm working. I feel more productive for two reasons when I do this: 1. I'm not dehydrated; 2. it forces me to get up to use the restroom a lot more than usual, which gives me lots of mini breaks almost every hour to walk, fill up my bottle and use the restroom.

astonex(10000) 5 days ago [-]

That seems like way too much water and surely bad for your kidneys.

hanniabu(3828) 5 days ago [-]

I have never gone to the doctor for this, but I'm pretty sure I have ADD, and funnily enough I think I am able to concentrate way better when I'm dehydrated. Being on an empty stomach also seems to help. Anybody know if there's a scientific reason for this?

chimpburger(10000) 5 days ago [-]

Dangerous. You can die from water intoxication. It is an extremely painful way to die. Brain swells and is crushed by your skull https://en.wikipedia.org/wiki/Water_intoxication

ramraj07(10000) 5 days ago [-]

Yep let's overwork our kidneys to learn the guitar

edit: seriously though, I'm pretty sure that's not good for you. That actually sounds like it's in the ballpark of water poisoning.

Someone1234(4109) 5 days ago [-]

You may wish to go see your doctor; that doesn't seem normal (excess fluid consumption can be a sign of diseases like diabetes). And more worryingly, you're bordering on water intoxication at 32 oz/hour:

> Your kidneys can eliminate about 5.3-7.4 gallons (20-28 liters) of water a day, but they can't get rid of more than 27-33 ounces (0.8-1.0 liters) per hour.

https://www.medicalnewstoday.com/articles/318619.php
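
(For reference, the arithmetic: 32 US fl oz x 29.57 mL/fl oz is about 946 mL, so a bottle an hour is roughly 0.95 L/hour, right at the upper bound quoted above.)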

Also, you don't need 8 glasses of water a day; that is a myth. You should listen to your body[1].

[1] https://www.nytimes.com/2015/08/25/upshot/no-you-do-not-have...

farahday(10000) 5 days ago [-]

> I try and drink my 32 oz bottle once an hour when I'm working.

Seriously, how do you get any work done? That's like a sip every 30 seconds.

aiisjustanif(10000) 5 days ago [-]

32oz per hour?!?! Holy crap man, you are literally a human fountain.

arkades(4095) 5 days ago [-]

The "rest periods" were ten seconds long. This doesn't generalize into anything meaningful; when was the last time you focused on a new skill so intensely that you didn't have ten second pauses?

JesseAldridge(3774) 5 days ago [-]

Yes but the work periods were also ten seconds. So the suggestion seems to be that we should spend 50% of our time resting, with many very small rest periods interspersed between many very small work periods.

rexpop(4070) 5 days ago [-]

Basically every time I practice my horn. 10 seconds is a long time when I'm trying to crank through an exercise book.

The thought of holding off, even closing my eyes for 10 seconds, is pretty alien to my practice style.

fersc(10000) 5 days ago [-]

Everything else considered, the closing remarks of the press release would be useful to keep in mind if you wanted to develop this technique further:

> "Our results suggest that it may be important to optimize the timing and configuration of rest intervals when implementing rehabilitative treatments in stroke patients or when learning to play the piano in normal volunteers," said Dr. Cohen. "Whether these results apply to other forms of learning and memory formation remains an open question."

mrcoder111(10000) 5 days ago [-]

Sounds good. The problem is that in social settings (at work), if you take a break, i.e. play a game during work hours, people think you're lazy.

loco5niner(10000) 5 days ago [-]

I think playing a game during the break (I assume video game?) would be counterproductive.

It is not at all the same as taking a 'resting' break, as described in the article.

I actually have some experience here, as I have for several years played the same video game on my phone every time I go on break at work and, anecdotally, actually feel it has not helped me in this area.

I've recently decided to avoid the game except on my longer lunch breaks, and attempt to have 'restful' breaks instead. We shall see how it goes :-)

echelon(4108) 5 days ago [-]

This seems to fit with the spaced repetition learning curve. By default, Anki (spaced repetition software) will repeatedly show you new facts on a minute-long interval. Once you've got it, that interval becomes ten minutes, and then a day.

This stuff works. I've used it to study Japanese and Chinese with great success.

I'd love to see more studies as to why. If we understand the biochemistry, perhaps we can enhance it.
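
A toy Python sketch of those graduated intervals (a deliberate simplification; Anki's real scheduler is a variant of the SM-2 algorithm with per-card ease factors):

    from datetime import datetime, timedelta

    # Learning steps mirroring the comment above: one minute, ten minutes, a day.
    LEARNING_STEPS = [timedelta(minutes=1), timedelta(minutes=10), timedelta(days=1)]

    def next_review(step, answered_correctly):
        # Advance one step on success; restart at the shortest step on failure.
        step = min(step + 1, len(LEARNING_STEPS) - 1) if answered_correctly else 0
        return step, datetime.now() + LEARNING_STEPS[step]

    step, due = next_review(0, answered_correctly=True)     # due again in 10 minutes
    step, due = next_review(step, answered_correctly=True)  # due again in a day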

pitt1980(3971) 5 days ago [-]

Yeah, I was wondering that too.

How does muscle memory work?

Does spacing out activity work the same way that spacing out studying does?

Especially since muscle fatigue is a real limit on athletic training.

I wonder what could be done to optimize sports practice schedules around this idea.

spookybones(10000) 5 days ago [-]

As a Japanese learner who would also like to learn Korean, what do you consider great success? Are you conversational in each? Can you read in each language? I find Anki pretty boring but want to like it. After studying the basics, I've had better success with extensive reading.

lanewinfield(10000) 5 days ago [-]

With both this article and the propranolol one, it sure seems that resting (both short breaks and full sleeps) really helps a lot.

Makes you wonder why every CEO boasts about 4-hour nightly sleep schedules?

andy_ppp(4017) 5 days ago [-]

Lies and testosterone? It also allows you to do more work that you already know how to do, but makes any new work frustrating and difficult.

acura(10000) 5 days ago [-]

Aren't there people who actually need less sleep?

And couldn't needing less sleep be an explanation for why they reach these levels?

randomacct3847(2536) 5 days ago [-]

I think the key is sleep...

mkl(4047) 5 days ago [-]

The article is about research that shows the key isn't sleep.

whatshisface(10000) 5 days ago [-]

Are you telling me that checking Hackernews every couple hours is going to pay off?

>take a 10 second break

I guess that's the next browser plugin: instead of blocking Reddit/HN/Facebook/Twitter, limit them to 10-second bursts.

meowface(4104) 5 days ago [-]

>I guess that's the next browser plugin, instead of blocking Reddit/HN/Facebook/Twitter, limit them to 10 second bursts.

No clue if this would actually be beneficial for me, but I'd love to try an extension like this.

6nf(10000) 5 days ago [-]

My slow internet already takes 10 seconds to load a new site

codyb(3959) 5 days ago [-]

I have a feeling breaks where you're doing something relatively mindless (walking around the block, washing a dish, taking a shower, etc) would allow the mind to "breathe" and "digest" a bit better than a break on social media or the news.

All conjecture, but it works well for me when I'm using pomodoro to get up, move, get water, do a dish, or look out the window, as opposed to when I check reddit or hacker news for whatever new information has come out in the last two hours or whatever.

pkghost(3677) 5 days ago [-]

A couple of anecdotes:

I went to a group rhythm workshop in San Francisco called TaKeTiNa wherein a group of 50 of us learned, over the course of a few hours, to perform a stomp/clap polyrhythm (that is, a sequence that is really the combination of two sequences with different time signatures) that I wouldn't have guessed we could learn in a single session. The facilitator guided us through by starting with an approachable subunit of the pattern, adding to it piece by piece over a few minutes until we fell apart, and then giving us a few minutes of lying-down, closed-eye rest time. When we came back, the previous segment seemed relatively easy, and we moved into further complexity. By the end, I was both exhausted and impressed at how much we had learned.

I played lacrosse in high school (West coast, believe it or not). Several times we were in a rut a few days before a big game, and our coach would cancel the intervening practices. We'd show up to the game and be astonished at how well we played. (I realize this is a very different time scale of effect, and is perhaps better explained by higher level psychological factors rather than a lower level neurological/memory-formation mechanism, but, then again, maybe it applies at multiple scales.)

zeropnc(10000) 5 days ago [-]

San Francisco really is a different planet

tjbiddle(10000) 5 days ago [-]

This is something I've found true in my own personal experimentation over the past few years. Any time I've been training something, if I take a break from it, I come back and I have more skill than I had previously.

Recently this occurred with my handstand practice. It's something that I've been working on the past year or two on and off, but more heavily recently. I've made some great strides, but the past week or two I've had a number of things distract me from my normal practice.

Jumping back into it this past week, I've found my balance and strength are an order of magnitude better.

This is only a single anecdote, but I've felt it rings true every time.

Iv(10000) 5 days ago [-]

Here's what I noticed a few years ago when I decided to bring myself up to date again on machine learning after a ten-year break (and what a ten years that was for the domain!). I read a ton and watched several online classes.

I realized that the pace was very different from the classes I attended during my student years: classes were boring, so I had time to think about stuff.

The rhythm that I found worked really well for such information-dense subjects was 1:1 breaks. One hour of classes, one hour to think about it (usually I would go for a walk).

The internet taught us to drink from a firehose, but our brain needs some time to process the information. It can't accumulate information and digest it at the same time.

You just learned about drop-out layers or the drawbacks of softmax? Don't feel bad about switching off the computer and thinking about it. No one should judge you for 'doing nothing'.

I remember that at my last corporate job, I used to go for a break/walk when stuck on a nasty bug and would often come back with the solution. My colleagues frowned a bit upon that, but luckily my boss, a former researcher, totally approved of the method.

projektir(10000) 5 days ago [-]

I have observed similar things and so far the relationship continues to bewilder me. At times I return with a better ability for something I haven't touched in years, while when I was actively training it, improvement was slow. Perhaps some cross-pollination from another source, but still strange.

It's... not always encouraging, as at times it's very hard to say what has improved it and whether practice is all that useful.

z3t4(3839) 5 days ago [-]

I couldn't access the article. Is it balance related? When I learned rope walking, I first practiced for about two months but could only take a few steps. Then I took two months off, and when I tried again I could magically walk the whole rope!

debt(3985) 5 days ago [-]

There's a phrase: "sleep on it".

rorykoehler(3462) 5 days ago [-]

When I used to practice competitive sports I spent a lot of time doing positive visualisation. This involved (for track & field sprinting) visualising the timing of my foot strike and visualising the stride motion and power transfers, and (for mtbing) daydreaming about railing difficult lines on my favourite tracks, including visualising the bike physics (especially in the wet). Though it's only anecdotal, and there were no doubt other genetic factors at play, I excelled at a rate much ahead of my peers. I have tried the same with more cerebral tasks, and whilst it definitely works, I find I need to spend more time immersing myself in the practice with these types of tasks.

hanniabu(3828) 5 days ago [-]

Speaking of handstands, I've been wanting to get started with that too, as an item on my skills-to-acquire list. Any tips for a fellow beginner? I could never seem to get past the point needed to balance: I kick my body back and it stops right before the balance point, and my body slowly goes off center and comes crashing down. Have you found it easier to keep your body straight and stiff, or with feet tucked in a bit? Is it easier to learn using your head against the ground, or going for it with your arms straight? Any supplemental exercises or methods that you've found have helped?

steveeq1(3696) 5 days ago [-]

I've noticed this with guitar as well.

kbutler(4112) 5 days ago [-]

Note that this article is specifically about short breaks (10s) vs overnight or longer breaks.

random_kris(4092) 5 days ago [-]

This has happened to me with video games. I was really trying to get good at Fortnite in February/March, and I actually got quite good but hit some kind of a mind block... Then, due to a busy life, I couldn't play for a month. I got sick 3 days ago and have been staying home and playing it, and I am noticeably better than I was a month ago. So maybe alternating between playing for 14 days and then taking a break for a few days could do some real good.

Razengan(3967) 5 days ago [-]

> Any time I've been training something, if I take a break from it, I come back and I have more skill than I had previously.

I've experienced this first hand, almost like a tangible thing, while learning a language.

Whenever I came back to it after a break, sometimes after months of no exposure to the language, I was surprised to find myself able to understand some new dialogue without looking at subtitles etc.

agumonkey(929) 5 days ago [-]

I'm a self-taught jack of all trades. What you describe is ultra common in music. I used to obsess for hours over exercises and quickly got nowhere. Whereas after not doing anything for a week and going back to the instrument, everything was in place without even trying. Super odd at first.

Recently I've been trying to learn electrochemistry and electromechanics by copying YouTube videos, which is different from doing it in real life (the consequences can be lethal at times). And I've noticed an amplified version of the music pause-improvement: when I'm stuck on a project, 6 months later I wake up one morning and feel 1) confident, 2) motivated, 3) sure about some ideas I didn't really see before.

Again, nothing but time away from the task, and again, very odd.

Got me thinking about progress and time. You can clearly see the steps taken by people or groups in lifting up their lives. You do a bit, live this way, wait, and one day you make a new step. There's a natural rhythm. Except for the illuminated few who can skip steps ahead of the average person, of course.

resoluteteeth(10000) 5 days ago [-]

This may be true, but the article is talking about 10 second breaks so it's a little bit different from what you're describing.

lemming(1145) 5 days ago [-]

I've also been working on handstands a lot recently, and I've found this very hit and miss. Sometimes I will come back and be much better than previously, and sometimes I will totally suck after a break. There doesn't seem to be much middle ground there.

I've found handbalancing in general to be very frustrating, in that sometimes it will go really well and other times I will totally suck, and there's no obvious reason why. I.e., I slept ok, I ate breakfast, I'm not particularly stressed or unfocused, but it just won't work at all. And then the next day will be fine.

ramblerman(3111) 5 days ago [-]

The best piano teacher I ever had left me with a little trick like this.

When I was working through a song, getting the next few bars into my fingers, let's say: if I got it right without a mistake, I had to stop immediately, lift my hands off the keyboard, and reward myself with a little breath, telling my subconscious that was it.

Before that I would just practice the loop over and over.

dalbasal(10000) 5 days ago [-]

Sounds curiously like dog/animal training. The training session ends on completion of a task (e.g. fetch), followed by some sort of reward (food, play...).

Part of the art is recognizing when the animal isn't into it anymore. If that happens, and you go into another repetition, the last rep is frustrating and/or unsuccessful and it's hard to end on a satisfying success... which is counterproductive. Much better to skip that last rep and get the optimal reinforcement.

It occurs to me that whether training an animal or yourself, this needs to be a conscious plan. It's more natural to continue when things go well and quit when they stop.





Historical Discussions: Mozilla WebThings (April 18, 2019: 583 points)
Mozilla WebThings (April 18, 2019: 11 points)

(586) Mozilla WebThings

586 points 1 day ago by sohkamyung in 31st position

hacks.mozilla.org | Estimated reading time – 6 minutes | comments | anchor

The Mozilla IoT team is excited to announce that after two years of development and seven quarterly software updates that have generated significant interest from the developer & maker community, Project Things is graduating from its early experimental phase and from now on will be known as Mozilla WebThings.

Mozilla's mission is to "ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent."

The Mozilla IoT team's mission is to create a Web of Things implementation which embodies those values and helps drive IoT standards for security, privacy and interoperability.

Mozilla WebThings is an open platform for monitoring and controlling devices over the web, including:

  • WebThings Gateway – a software distribution for smart home gateways focused on privacy, security and interoperability
  • WebThings Framework – a collection of reusable software components to help developers build their own web things

We look forward to a future in which Mozilla WebThings software is installed on commercial products that can provide consumers with a trusted agent for their "smart", connected home.
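
For a flavour of the WebThings Framework side, here is a minimal sketch of a single web thing, following the pattern documented for Mozilla's webthing-python library (the lamp naming and port are illustrative):

    from webthing import Property, SingleThing, Thing, Value, WebThingServer

    # Describe one device ('thing') with a single boolean on/off property.
    lamp = Thing('urn:dev:ops:my-lamp-1234', 'My Lamp',
                 ['OnOffSwitch'], 'A web connected lamp')
    lamp.add_property(Property(
        lamp, 'on', Value(True),
        metadata={'@type': 'OnOffProperty', 'title': 'On/Off', 'type': 'boolean'}))

    # Serve it over HTTP so a WebThings Gateway on the local network can
    # discover and control it.
    server = WebThingServer(SingleThing(lamp), port=8888)
    server.start()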

WebThings Gateway 0.8

The WebThings Gateway 0.8 release is available to download from today. If you have an existing Things Gateway it should have automatically updated itself. This latest release includes new features which allow you to privately log data from all your smart home devices, a new alarms capability and a new network settings UI.

Logs

Have you ever wanted to know how many times the door was opened and closed while you were out? Are you curious about energy consumption of appliances plugged into your smart plugs? With the new logs features you can privately log data from all your smart home devices and visualise that data using interactive graphs.

In order to enable the new logging features go to the main menu ➡ Settings ➡ Experiments and enable the "Logs" option.

You'll then see the Logs option in the main menu. From there you can click the "+" button to choose a device property to log, including how long to retain the data.

The time series plots can be viewed by hour, day, or week, and a scroll bar lets users scroll back through time. This feature is still experimental, but viewing these logs will help you understand the kinds of data your smart home devices are collecting and think about how much of that data you are comfortable sharing with others via third party services.

Note: If booting WebThings Gateway from an SD card on a Raspberry Pi, please be aware that logging large amounts of data to the SD card may make the card wear out more quickly!

Alarms

Home safety and security are among the big potential benefits of smart home systems. If one of your "dumb" alarms is triggered while you are at work, how will you know? Even if someone in the vicinity hears it, will they take action? Do they know who to call? WebThings Gateway 0.8 provides a new alarms capability for devices like smoke alarms, carbon monoxide alarms or burglar alarms.

This means you can now check whether an alarm is currently active, and configure rules to notify you if an alarm is triggered while you're away from home.

Network Settings

In previous releases, moving your gateway from one wireless network to another when the previous Wi-Fi access point was still active could not be done without console access and command line changes directly on the Raspberry Pi. With the 0.8 release, it is now possible to re-configure your gateway's network settings from the web interface. These new settings can be found under Settings ➡ Network.

You can either configure the Ethernet port (with a dynamic or static IP address) or re-scan available wireless networks and change the Wi-Fi access point that the gateway is connected to.

WebThings Gateway for Wireless Routers

We're also excited to share that we've been working on a new OpenWrt-based build of WebThings Gateway, aimed at consumer wireless routers. This version of WebThings Gateway will be able to act as a wifi access point itself, rather than just connect to an existing wireless network as a client.

This is the beginning of a new phase of development of our gateway software, as it evolves into a software distribution for consumer wireless routers. Look out for further announcements in the coming weeks.

Online Documentation

Along with a refresh of the Mozilla IoT website, we have made a start on some online user & developer documentation for the WebThings Gateway and WebThings Framework. If you'd like to contribute to this documentation you can do so via GitHub.

Thank you for all the contributions we've received so far from our wonderful Mozilla IoT community. We look forward to this new and exciting phase of the project!

Full time UK-based Mozillian, working on the Web of Things.

More articles by Ben Francis...




All Comments: [-] | anchor

ousta(10000) 1 day ago [-]

It seems to be mostly for domotics, right? What about IoT for other things, like medical devices for instance? That would be a good area to open source.

kgiori(10000) 1 day ago [-]

The W3C Web of Things framework is agnostic to use cases and devices. Mozilla's WebThings framework can work in medical, industrial, etc. It's just that Mozilla's focus is to put people first. (Plus, it's easiest to invite makers and community devs when contributors can use it themselves at home.) All the code is open source -- I've already seen some industrial companies pick it up and customize it for their needs.

aphextron(2626) 1 day ago [-]

My first thought seeing this headline: Let me guess, another move from Mozilla to start gathering more personal data. I wish they'd proven me wrong.

kgiori(10000) 1 day ago [-]

The WebThings gateway is an opportunity for Mozilla to help protect privacy and security, not copy the approach of industry silos where devices connect to the cloud. Your gateway data stays local.

walid(3722) 1 day ago [-]

'another move from Mozilla to start gathering'

What do you mean by 'another move' here? Mozilla isn't gathering anything!

bionicbits(10000) 1 day ago [-]

I thought Mozilla was pro-privacy with all the features they have added to the browser. Have I been missing something? I recently came back to Firefox for their stance on privacy.

chucksmash(3928) 1 day ago [-]

> Have you ever wanted to know how many times the door was opened and closed while you were out? Are you curious about energy consumption of appliances plugged into your smart plugs? With the new logs features you can _privately_ log data from all your smart home devices and visualise that data using interactive graphs.

> Note: If booting WebThings Gateway from an SD card on a Raspberry Pi, please be aware that logging large amounts of data to the SD card may make the card wear out more quickly!

bedosh(10000) 1 day ago [-]

How is Mozilla gathering personal data from people using WebThings? When I tried WebThings Gateway a few months back, it did not seem to collect/send personal data off site. It allows you to create a public endpoint as a subdomain of mozilla-iot.org for easy off site access (https://github.com/mozilla-iot/wiki/wiki/Gateway-Remote-Acce...) which would allow some tracking I assume, but in general I have noticed no hidden data collection as your comment implies.

igetspam(10000) 1 day ago [-]

Have you played with it at all? Mozilla isn't acting as a data broker, they're providing software and a framework. The world of iot doesn't have that many great offerings for private systems. Mozilla is taking their years of software experience and building something decent. I'm glad to see this hasn't been shuttered yet. I've been wanting to move more of my home automation away from cloud based services and this has a lot of potential. (The current best offering I know of is hass)

arthurcolle(3208) 1 day ago [-]

Sorry for the aside but what CMS does Mozilla use for these publications? Just out of curiosity

callahad(2762) 1 day ago [-]

The Hacks blog is powered by WordPress.

Khalos(10000) 1 day ago [-]

Looking at the source, it looks like they're just using WordPress.

wayneftw(10000) about 21 hours ago [-]

Are there any Mozilla projects that have any sort of market share and make money or are they mostly just living off that sweet Google cash for their ~10% Firefox market share?

Any chance Google stops funding them soon?

(Wow, a question was asked. Better censor it Mozillans!)

sciurus(306) about 21 hours ago [-]

This is a fair question, albeit off topic.

You can see Mozilla's latest annual report at https://www.mozilla.org/en-US/foundation/annualreport/2017/

And the latest public financial statement at https://assets.mozilla.net/annualreport/2017/mozilla-fdn-201...

(Disclosure: I work for Mozilla, but not on WebThings or producing those reports)

scriptkiddy(4111) about 18 hours ago [-]

> Are there any Mozilla projects that have any sort of market share and make money or are they mostly just living off that sweet Google cash for their ~10% Firefox market share?

Is that a relevant question considering that Mozilla Foundation is a non-profit organization?

> Any chance Google stops funding them soon?

Probably, but that won't stop them from doing what they're currently doing. Also, Google doesn't 'fund' them so much as pay them for search integration.

> (Wow, a question was asked. Better censor it Mozillans!)

The question was not presented in good faith. In fact, it's clear that you weren't actually asking a question at all. You were stating your opinions sarcastically in the form of a question.

robotron(10000) about 20 hours ago [-]

You also asked a question with a lot of editorializing in it. I wouldn't call downvotes censorship.

dccoolgai(3391) 1 day ago [-]

My first reaction is 'wtf? What about Web Bluetooth?' That is an open standard that supports tinkering, integrates with the Permissions API, etc. And it's been withering on the vine with no help from Moz... this looks like a poor alternative, tbh, unless I'm missing something.

kevingadd(3681) 1 day ago [-]

What does Web Bluetooth have to do with IoT? Bluetooth is a short range, low-bandwidth protocol for 1:1 pairing of devices, isn't it?

That aside, Google's approach to introducing experimental APIs like Web Audio, Web USB, Web Bluetooth etc has been... less than ideal. Having been involved in it, I don't think it's surprising that other vendors are not interested in helping.

pagutierrezn(3920) 1 day ago [-]

I expected to see MQTT as an integral part of the WebThings Gateway but haven't found it so far

IshKebab(10000) 1 day ago [-]

MQTT is kind of crap so I'm not surprised.
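
For anyone who does want MQTT in the mix, bridging an MQTT device yourself is only a few lines with the paho-mqtt client; a rough sketch (the broker address, topic and payload shape are all hypothetical):

    import json
    import paho.mqtt.client as mqtt

    # Forward readings from a (hypothetical) sensor topic to whatever
    # updates the corresponding web thing property.
    def on_message(client, userdata, msg):
        reading = json.loads(msg.payload)
        print('would set thing property to', reading['temperature'])

    client = mqtt.Client()
    client.on_message = on_message
    client.connect('localhost', 1883)
    client.subscribe('home/livingroom/temperature')
    client.loop_forever()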

carmate383(4113) 1 day ago [-]

C'mon Mozilla, please. Fewer projects that will end up like Firefox OS, more work on _Firefox_.

Vinnl(516) 1 day ago [-]

The phone OS market was already pretty entrenched when they started on Firefox OS. They're focusing on IoT now exactly to prevent it from ending up like Firefox OS, and to prevent them from ending up in a Firefox Desktop-like situation, where its relevancy keeps declining both because the relevance of desktop is declining, and because people do not use Firefox on mobile and thus cannot sync.

Fnoord(3868) about 22 hours ago [-]

What kind of work on Firefox are you referring to?

marcosdumay(4117) about 18 hours ago [-]

Because developers are interchangeable gears that can work on any kind of project without any bump, and can be stacked into any number on a shared project without any issue.

snek(4050) 1 day ago [-]

If you somehow feel that Firefox is moving too slowly (check the VCS history, it's very active) you can always contribute yourself, it's OSS :)

metildaa(10000) 1 day ago [-]

Firefox OS has been rebranded as KaiOS since Mozilla stopped working on it, and it has been gaining significant market share since then. A $5 LTE bar or flip phone based on a Qualcomm 205 with 128MB of RAM isn't able to run Android (especially not in a fast manner), but it will run KaiOS just fine.

danso(4) 1 day ago [-]

Seriously? IoT is a quickly growing, potentially massive and ubiquitous field. Given the shit state of security and technical implementation of its initial era, I'm very glad that a well-resourced non-profit is helping to create an open standard.

the8472(4065) 1 day ago [-]

Does it use multicast discovery?

rzr(4115) 1 day ago [-]

yes mDNS
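
Easy to verify from Python with the zeroconf package; the '_webthing._tcp' service type is what the webthing libraries advertise (treat that as an assumption and check against your own devices):

    import time
    from zeroconf import ServiceBrowser, Zeroconf

    class Listener:
        # Called when a matching service appears on the local network.
        def add_service(self, zc, type_, name):
            print('found:', name)

        def remove_service(self, zc, type_, name):
            print('gone:', name)

        def update_service(self, zc, type_, name):
            pass

    zc = Zeroconf()
    ServiceBrowser(zc, '_webthing._tcp.local.', Listener())
    time.sleep(10)  # browse for ten seconds
    zc.close()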

canada_dry(4057) 1 day ago [-]

Took a quick breeze through the intro and maybe missed it...

Where's the love for IoT DIY/tinkerer gadgets though? e.g. Arduino, BeagleBone, ESP8266/32?

mintplant(1578) 1 day ago [-]

Check the 'Arduino' tabs on the WebThings Framework homepage [0]. See also the Supported Hardware list [1], which includes ESP8266, ESP32, and Raspberry Pi (among many others).

[0] https://iot.mozilla.org/framework/

[1] https://github.com/mozilla-iot/wiki/wiki/Supported-Hardware

lousken(10000) 1 day ago [-]

Would this be compatible with Turris Omnia?

kgiori(10000) 1 day ago [-]

That's one of the targets for OpenWrt support, yes.

giancarlostoro(3283) 1 day ago [-]

This is the Mozilla I love. I wish they did more projects like this, it's a shame Firefox OS went away, especially in this world of PWAs.

I do love that this dashboard can even be run on your router. Also the web UI looks really clean.

metildaa(10000) 1 day ago [-]

Firefox OS has been rebranded as KaiOS, and it's eating market share with $5 LTE bar and flip phones. Said devices are pretty minimally specced with a Qualcomm 205 and 128MB of RAM, but Mozilla did a great job optimizing for low-end devices.

nessup(3934) 1 day ago [-]

This is amazing. Finally Mozilla is sticking its nose into a growing market that REALLY needs their help!

I wonder how much of this will be Rust? :)

SimeVidas(3841) about 20 hours ago [-]

Check out Mozilla Labs: https://labs.mozilla.org/. They're working on all kinds of things.

rident(10000) about 21 hours ago [-]

It would be cool to see a reboot of Firefox OS with Rust and WASM some day.

hathawsh(10000) 1 day ago [-]

Whoa, at the end of the article there's an announcement about 'WebThings Gateway for Wireless Routers'. Is this the beginning of apps for routers? I've talked about the idea before:

https://news.ycombinator.com/item?id=18887403

I'm excited to hear more.

flipper_c(10000) 1 day ago [-]

That was what I found most interesting too. But then I'm a hardware geek!

paavoova(10000) 1 day ago [-]

What exactly differentiates 'apps' here from regular programs you can install on routers? If your router runs a hosted system such as Linux and isn't locked down (e.g. OpenWRT), you can install or compile any existing program that supports the architecture and system. Installing tor, as in your linked post, is just 'opkg install tor'. A lot of packages have webgui frontends, too.

Edit: I've read the article and Mozilla even mentions reskinning/basing off of OpenWRT. Which only puzzles me even more given you write 'the beginning of apps for routers' as if OpenWRT/pfSense/etc doesn't exist.

doctorpangloss(3979) 1 day ago [-]

Docker for NAS devices is the de facto App Store. The most compelling applications are for media consumption / piracy.

josteink(3498) 1 day ago [-]

> We're also excited to share that we've been working on a new OpenWrt-based build of WebThings Gateway, aimed at consumer wireless routers.

I'm impressed. Almost sold, even.

All I'm left wondering about is device-support.

How does this compare to (for instance) Home Assistant?

xiaomai(4115) about 17 hours ago [-]

I've been using Things gateway for probably a year or so now (I'm just using z-wave devices, but Things supports lots of other types as well). The main thing I see missing that most home automators will want is support for locks/garage door openers/etc. The interior stuff like lights/alarms/plugs/sensors/etc. is all great.

A lot of emphasis is on their Web of Things protocol that looks really cool and would be fun to play around with if you wanted to add your own home-made device (or hopefully more vendors will be adopting that protocol soon).

VectorLock(10000) 1 day ago [-]

What products support WebThings right now?

kbumsik(1894) 1 day ago [-]

Here is the maintained list of supported hardware:

https://github.com/mozilla-iot/wiki/wiki/Supported-Hardware

askvictor(4102) 1 day ago [-]

This looks like a competitor to homeassistant/Hass?

anyzen(10000) 1 day ago [-]

Yes, it looks this way to me too. Interesting that they didn't join forces.





Historical Discussions: Amazon 'flooded by fake five-star reviews' – report (April 15, 2019: 579 points)

(579) Amazon 'flooded by fake five-star reviews' – report

579 points 4 days ago by drugme in 3229th position

www.bbc.com | Estimated reading time – 3 minutes | comments | anchor

Image copyright Getty Images

Online retail giant Amazon's website is flooded with fake five-star reviews for products from unfamiliar brands, consumer group Which? has claimed.

Household names were largely absent from top-rated reviews on popular items such as headphones, smart watches and fitness trackers, it concluded.

Thousands of reviews were unverified, meaning there was no evidence the reviewer bought the product, it said.

Amazon said it was using automated technology to weed out false reviews.

It said it invested 'significant resources' to protect its review system 'because we know customers value the insights and experiences shared by fellow shoppers'.

'Even one inauthentic review is one too many,' it added.

But Which?'s probe suggested fake reviews were commonplace.

When it searched for headphones, it found all the products on the first page of results were from unknown brands - which it defines as ones its experts have never heard of - rather than known brands, which it defines as household names.

Of 12,000 reviews for these, the majority (87%) were from unverified purchases.

One example, a set of headphones by an unknown brand called Celebrat, had 439 reviews, all of which were five-star, unverified and posted on the same day, suggesting they had been automated.

Celebrat could not be reached for comment.


Image copyright Getty Images

How to spot a fake review

  • Do not rely on ratings - delve deeper and read the reviews
  • Check the dates - look at when the reviews were posted. If many of them were posted in a short time period, it's likely they have been computer generated and are fake
  • Filter out unverified reviews - only reviews marked as verified are ones Amazon can confirm were purchased on its website (see the sketch after this list)
  • If products have hundreds or thousands of largely positive reviews be wary

Source: Which?
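
The 'filter unverified' advice above is easy to script if you have the reviews in hand; a toy sketch (the data format is made up for illustration):

    # Each review: stars plus a verified flag. A made-up format.
    reviews = [
        {'stars': 5, 'verified': False},
        {'stars': 5, 'verified': False},
        {'stars': 2, 'verified': True},
    ]

    verified = [r for r in reviews if r['verified']]
    print(f'{1 - len(verified) / len(reviews):.0%} unverified')  # Which? found 87%
    print('average over verified reviews only:',
          sum(r['stars'] for r in verified) / len(verified))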


ReviewMeta, a US-based website that analyses online reviews, said it was shocked at the scale of the unverified reviews, saying they were 'obvious and easy to prevent'.

The popularity of online review sites means they are increasingly relied on by both businesses and their customers, with the government's Competition and Markets Authority estimating such reviews potentially influence £23bn of UK customer spending every year.

Which? says its findings mean that customers should take reviews with 'a pinch of salt'.

'Look to independent and trustworthy sources when researching a purchase,' says Which? head of home products Natalie Hitchins.


Do you write fake reviews on Amazon? Or have you fallen foul of a fake review? Email [email protected]

Please include a contact number if you are willing to speak to a BBC journalist. You can also contact us in the following ways:




All Comments: [-] | anchor

d0m(3889) 4 days ago [-]

When buying on Amazon, I'm mostly interested in the 2-4 range reviews; I find this is where people discuss the pros/cons instead of just the pros (5 - best product ever!1!) and just the worst (1 - it had a defect because I threw it in the pool even though it said don't throw it in the pool and the company doesn't want to reimburse me, never buying from it again yadayada - kind of reviews).

jay_kyburz(4093) 4 days ago [-]

Note to self: When writing fake reviews, just make them 4 stars.

tenaciousDaniel(10000) 4 days ago [-]

Forgive me for being naive, but isn't this an easy problem to solve? Amazon tracks shipments, I assume. Couldn't they just create a mechanism where a completed shipment gives the purchaser one allowance for adding a review? This way you can only review if Amazon has verified that you bought the product.

I must be missing something crucial here.
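
The mechanism described above, as toy logic (the data model is entirely hypothetical):

    # One review allowance per completed shipment.
    allowances = {('alice', 'B01-HEADPHONES'): 1}  # (user, product) -> count

    def post_review(user: str, product: str, text: str) -> None:
        if allowances.get((user, product), 0) < 1:
            raise PermissionError('no verified purchase on record')
        allowances[(user, product)] -= 1
        print(f'{user} reviewed {product}: {text}')

    post_review('alice', 'B01-HEADPHONES', 'sound is fine, cable is flimsy')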

baroffoos(10000) 4 days ago [-]

How it typically works is the company requests that you buy the product and tells you they will refund your purchase if you leave a 5-star review. On Amazon's side it looks like these people really did buy the product. Fake reviewers get paid in free stuff and the company just uses their marketing budget to give stuff away.

avhon1(10000) 4 days ago [-]

It's reasonable to review a product on amazon that you bought somewhere else.

tlrobinson(355) 4 days ago [-]

Why even bother allowing unverified purchase reviews?

rlayton2(10000) 4 days ago [-]

I think there can be lots of value in them, but only in the long-form review, where the product is described and evaluated. Aggregating the numbers makes no sense.

dingaling(3980) 4 days ago [-]

That really only works for high-demand, current products.

Back when I used to be engaged with Amazon I wrote a lot of reviews for out-of-print secondhand books that I borrowed from libraries or bought from specialist shops. I was usually quite harsh with my reviews but was often the only reviewer and usually the book was out of stock at the time.

People seemed to appreciate the long-tail reviews and I was up in the top 100 UK 'most helpful' reviewer list.

If I had only reviewed books I'd bought on Amazon I would have written about 10% as many reviews.

cptskippy(10000) 4 days ago [-]

I receive an email at least once a month, at an address I use exclusively for Amazon purchases, inviting me to join a website that will reimburse me via PayPal if I purchase products and review them.

I have forwarded the emails to Amazon a couple times explaining that my email address is used exclusively for them making it easier to narrow down who might be sending these emails. They always respond with a warning that my account might be suspended if I partake in such sites.

helloindia(10000) 4 days ago [-]

There are public facebook groups, where sellers buy Amazon reviews. https://outline.com/qTE2TY

luckylion(10000) 4 days ago [-]

Same here. Cryptic alias on my own domain that I never use for anything else. It's rare though, I get one spam email to that address once every few months now, and it's always shopping/review-related. I've contacted Amazon to ask how, when and with whom they shared my email address and their response was basically 'we don't ever, but here's a month of free prime for your trouble'.

harry8(4097) 4 days ago [-]

Blog it with any specific info you want to protect redacted. Give newspapers and equity analysts a chance to find what you are saying backed by evidence. This is the only way $BIGCO will ever care. Do you care enough to do it? I probably wouldn't.

burlesona(3981) 4 days ago [-]

I can't pinpoint exactly when it happened, but at some point in the last year or so I completely lost trust in purchasing goods on Amazon.

I've been a Prime customer since Prime launched. I loved it for a decade. It used to be an automatic reflex for me: need something? Type it in Amazon and click buy.

But now I don't trust Amazon search results at all, and when I do purchase I only do it via direct product links from other sites I trust (like Wirecutter or the manufacturer's site). Increasingly I buy direct from brands' websites.

I wanted to drop Prime this year. My wife argued we should keep it because the kids watch a lot of Prime Video. But we've already got Netflix, and with Disney launching their thing I think I'd rather buy that than stick with Prime any longer.

Not sure where Amazon is headed, and I wish the fate of AWS and Whole Foods weren't at least somewhat tied up in the fate of Amazon's retail operation.

Theodores(4085) 4 days ago [-]

Find Lock Picking Lawyer on YouTube and his reviews of the locks Amazon recommend. See him open them in seconds. Share videos with wife. She will be with you on cutting the umbilical cord to Amazon after that.

The locks they recommend as 'Amazon's Choice' have known vulnerabilities that design solutions were found for many decades ago, so the products are essentially naive. If Amazon deliberately set out to hype the most useless locks so you would have your stuff stolen (and have to buy more from Amazon), they would struggle to do a better job.

Of course the locks come with hundreds of five star reviews even though they can be opened in seconds with low skill attacks.

istjohn(10000) 4 days ago [-]

AWS is wildly profitable for Amazon. I don't think you need to worry about AWS going anywhere. It's the goose that lays the golden eggs.

crankylinuxuser(4053) 4 days ago [-]

News at 11.

Seriously? Go hang out at Amazon for a hot second. You'll see:

     1. Fraudulent products
     2. Fake products
     3. Ripoff obvious clones
     4. Fake stores
     5. Fake reviews (5 star AND good 1 star like the bot got mixed up)
     6. Any of the above mixed into the supply of the 'good' legit products
     7. Amazon Piss bottles (0)
Amazon is now what Walmart was 10 years ago - a scourge and a horror to work at for any length of time. It's no surprise that a company willing to sell off even more of its underlying ethics would beat Walmart at their own game.(1) In that, Walmart only has the upper hand because of local stores...

(0) https://nypost.com/2018/04/16/amazon-warehouse-workers-pee-i...

(1) https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

int_19h(10000) 4 days ago [-]

You forgot another category: products that are outright illegal. For example, 'oil filters' that are basically silencers, or Glock full auto conversion parts.

https://smile.amazon.com/JINJULI-Tactical-Semi-Automatic-Cov... (google 'Glock FSSG' to see what this actually is)

https://smile.amazon.com/Filter-Aluminum-24003-Durable-Const... (note the reviews and questions on this one)

Now, to Amazon's credit, when you report these, they do take the ones that are unambiguously illegal down. The problem is that they get re-posted under a slightly different title within a few hours.

matthewfcarlson(10000) 4 days ago [-]

My issue with Amazon reviews is the ease with which sellers can change the product and keep the reviews. I recently tried to buy a decent pair of USB C headphones. The top two recommended products both had 5 stars with hundreds of verified reviews... for a DVD copy of a classic movie. I've decided I just can't trust Amazon reviews anymore.

cortesoft(10000) 4 days ago [-]

Or the one product that has like ten different 'variations' which are actually totally different products, yet share reviews. A review on one of them means nothing for the other nine.

nathancahill(2662) 4 days ago [-]

Or grouping 20 distinct products under one item.

modzu(4071) 4 days ago [-]

I was looking at a speaker; a couple of the verified reviews said something like, 'it fit a little tight'.

ReviewMeta(10000) 4 days ago [-]

I also agree that it should be very easy for Amazon to pick up on. I can't believe it's happening. We call this 'Review Hijacking' and have a warning that appears at the top of our report if we detect it: https://reviewmeta.com/blog/amazon-review-hijacking/

simonh(4119) 4 days ago [-]

In an unregulated market, brand identity and reputation becomes a very strong differentiating factor. The strongest brand on Amazon is Amazon, so by allowing the generic market to rot they are allowing their own-brand basics products to establish themselves because consumers can assume that Amazon products won't have this fake review problem.

It also works for other brands, of course. I was in the market for some Lightning cables recently and almost every single cheap cable was a switch-out, with reviews for different products. The only brands I recognised were Amazon and Anker and both their stuff was legit (I bought from both).

gordon_freeman(2518) 4 days ago [-]

This always happens with various editions of books too, especially classics. For example: I want to buy Meditations by Marcus Aurelius, but there are so many editions of that book by different translators, and you'd see the same exact reviews on all of these editions. It is very hard for me in that case to choose the right book.

OrgNet(4010) 4 days ago [-]

just buy from somewhere else whenever it is cheaper

sonnyblarney(3337) 4 days ago [-]

This is yet another, surely well-known 'leak' in how products are presented and displayed; since it can still happen, Amazon is tacitly complicit in the fraud.

Most of these issues could be mostly cleared up with some basic operational policies: purchase-only reviews (or sideline the others), no 'changing the product', some degree of manufacturer verification, etc.

But they chose not to do it.

Semaphor(10000) 4 days ago [-]

While Amazon Germany (afaik) does not yet have many of the problems of Amazon US (I often hear on HN about fake products and the fake reviews of TFA), what really annoys me is completely different products listed as one. I was looking at a drill; its variants included a jigsaw. The reviews were almost all for the jigsaw, and at some point Amazon even removed the notice saying which variant was reviewed. Utterly useless.

sanderjd(4017) 4 days ago [-]

I really think Amazon needs to fix this. I'm about two shitty products away from cancelling Prime. Their business model is 'good enough products shipped really quickly without needing to think about it much'. Recently, things I order aren't good enough. I find myself thinking about whether I should just drive to Target so that I can see the thing before I buy it. This seems like a problem for Amazon.

rlander(2776) 4 days ago [-]

My last employer sent out a company-wide email asking every employee (and their friends and family) to order their flagship product (IoT device) on Amazon, write a 5-star review and e-mail the marketing department for reimbursement and product return. The returned products would then be re-shipped to Amazon.

Needless to say, it was the shortest time I've ever spent at a job.

fmajid(10000) 4 days ago [-]

They are trying to fix the problem, but the number of scammers dwarfs the number of Amazon employees: https://blog.dshr.org/2019/04/what-is-amazon.html

That said, Amazon could buy Fakespot or one of its competitors with small change from Jeff Bezos' sofa, and they are clearly doing a better job than Amazon at rooting out fraudulent reviews.

Ironically, Amazon's enforcement was turned against them. Unscrupulous merchants are planting obviously fake five-star reviews on their competitors' listings. Amazon then takes down the framed competitor. Genius!

luckylion(10000) 4 days ago [-]

> They are trying to fix the problem

Doubt. Amazon has unlimited data on everything; they could fix this if they wanted to. They just have zero incentive to: the sales happen on Amazon's platform, and a lot of them use FBA, so Amazon gets paid either way.

apacheCamel(10000) 4 days ago [-]

Like everything else online, do your research. I assume we all know plenty of people who have taken online reviews at face value, I have done it myself. Like most other things on the internet, they can be faked. It is sad that people take advantage of good faith systems like this and do not feel the consequences.

Side note: Having the consumer group be named 'Which?' and to include the question mark in it makes it jolting to read in the middle of a sentence. I can't tell if I like the name or dislike it a lot.

acdha(3641) 4 days ago [-]

> Like everything else online, do your research.

There are many categories where this advice sounds a lot easier than it is. There's a whole industry creating fake review sites, farming things out to people for favorable social media reviews, etc. and it takes a non-trivial amount of time and skill to figure out which sources are reliable. As long as the internet runs on advertising there's no reason to expect this to get easier since companies like Google make considerable amounts of money from all of those ads.

adrian_mrd(1411) 4 days ago [-]

I recently ordered a small device from Amazon; it was poor, so I gave it a 1-star rating with a short, negative review.

About a week later, I received an e-mail from the seller (via Amazon's payments communication system) asking for me to delete my review in return for being refunded the total amount, and stating that I could keep the device.

Whilst I didn't take up their offer, I assume many others did, which shows how Amazon is 'not flooded with one-star reviews'.

GordonS(760) 4 days ago [-]

I had exactly the same thing happen recently with a light bulb that wasn't nearly as bright as advertised. Seller said they'd refund me, let me keep the crappy bulb and send me a brighter and more expensive bulb for free.

I didn't take them up on this, as it wouldn't be fair for future customers; after all, I'd bought the crappy bulb because there were no bad reviews...

l8nite(10000) 4 days ago [-]

I've started using fakespot.com for every purchase from Amazon, has saved me from making a bad purchase more than once.

QuantumGood(3990) 4 days ago [-]

The Fakespot Chrome plugin speeds things up, as does checking the ratio of good to bad reviews.

But nothing is more important than reading the 1- and 2-star reviews, comparing them to similar products, and using judgement.

sytelus(310) 4 days ago [-]

Even better is 'Made in XYZ' in the description. If you are putting something in/on your body, this is absolutely important. But even if you are not, my experience is that products made in countries with trustworthy legal systems last 2X to 10X longer.

One thing that surprises me is why online stores are not forced to display this explicitly. When you go to a brick-and-mortar store, you can almost always look for this label, but Amazon doesn't enforce it on their listings.

anitil(10000) 4 days ago [-]

Do they have a plugin? I feel like that would make life a lot easier - you don't have to go to a second domain, just look at the listing and the plugin puts a big 'Don't Buy' banner over it

drexlspivey(2995) 4 days ago [-]

I love this kind of market-based solution. I entered the product from the other comment and it gave it an F. https://www.fakespot.com/product/kuppet-macbook-pro-charger-...

externalreality(3974) 4 days ago [-]

It's not just Amazon, it's everything. Companies spend large quantities of money on media control, we all know this. There are probably a handful of media companies in the USA, China, and India that are selling a review-control service.

BenoitEssiambre(3857) 4 days ago [-]

Yeah, and I feel the bots or human-assisted bots are getting more and more sophisticated. Even here on Hacker News it feels like there is something less organic than before about post and comment rankings.

It may be that I am getting old and that the culture has changed, but I'm not sure...

jiveturkey(4073) 4 days ago [-]

unverified != fake!!

Most of my reviews are 'unverified', because I purchase elsewhere. But because amazon is a great aggregator of reviews, I want to pay it back by helping others.

Now of course, there are indeed many fake reviews, but this article does a terrible job of explaining the situation.

dRaBoQ(10000) 4 days ago [-]

The article mentions a case where 439 unverified reviews were all posted on the same day for one unknown brand of headphones.

sonnyblarney(3337) 4 days ago [-]

Not really. Unverified de facto = fake.

Because unverified reviews are easy to make, it follows that manufacturers will just flood the system with fake ones. Which they do.

Unverified boils down to fake.

mmanfrin(3813) 4 days ago [-]

A couple months ago, I googled 'whiteboard amazon', clicked the first result, and was taken to a well-structured Amazon page for a whiteboard that had 5 stars. Looking at it a little closer, I noticed that of the 143 reviews, 143 were 5-star reviews. On top of that, every review followed a similar structure and was made by an account that had thousands of 5-star reviews, and it was so beyond-the-pale, obviously fraudulent that I felt the need to email [email protected]

Today I was looking at some vitamins, and I checked every single result for a certain supplement that was 4 or 5 stars and every single one of them ranked D or worse on FakeSpot.

How the fuck does Amazon not know how to deal with this? 100% of reviews coming in for a product on the same day? MAYBE THAT'S A SIGN, AMAZON.

Amazon reviews are worse than garbage now.

nkrisc(4108) 4 days ago [-]

I would not buy anything on Amazon that is meant to be consumed.

madsprite(10000) 4 days ago [-]

It gets even worse: genuine 5-star and low ratings can get flagged as false reviews by Amazon's algo.

Last year there was a post on Hacker News about a person who lost his Amazon account when his very first review got flagged, despite a long history of purchases.

esalman(3322) 4 days ago [-]

These kinds of review patterns are easy to spot. I used to read through recent reviews, etc. These days when I find a product on Amazon that I feel like buying, I go to reviewmeta.com and check the review trend before deciding.

chrischen(1960) 4 days ago [-]

> How the fuck does Amazon not know how to deal with this? 100% of reviews coming in for a product on the same day? MAYBE THAT'S A SIGN, AMAZON.

Maybe it would be bad for business if they did. People see a < 5-star rating and may not purchase, even if the 5-star rating was a lie.

_jal(10000) 4 days ago [-]

> How the fuck does Amazon not know how to deal with this?

How do you know they don't, and have via whatever mechanism decided that they make more money if they choose not to?

m3nu(3995) 4 days ago [-]

Surely their advanced machine learning would see such an obvious pattern?

sonnyblarney(3337) 4 days ago [-]

'and was so beyond the pale obvious fraudulent that I felt the need to email [email protected]'

Obviously Jeff is well aware, and could do a lot more about it, but doesn't, ergo he is complicit.

Fake Reviews are 100% supported by Amazon and Bezos.

It's basically full on fraud, right out in the open.

thinkloop(3474) 4 days ago [-]

> How the fuck does Amazon not know how to deal with this? 100% of reviews coming in for a product on the same day? MAYBE THAT'S A SIGN, AMAZON.

Then they would stagger the days. The issue is that human adversaries are formidable and you start approaching generalized AI to defeat them.

stebann(10000) 4 days ago [-]

Capitalist dynamics indicate that they DO KNOW that some reviews are fake.

PorterDuff(10000) 4 days ago [-]

I think that food supplements/vitamins are the worst. My one and only email to [email protected] was on that. It probably didn't do any good, but I felt better.

Luckily, they don't appear to be of any value anyway.

notacoward(4046) 4 days ago [-]

It's not just the fake reviews that bug me. I can use fakespot to weed through a few of those. The thing that has really made Amazon less usable for me in the last year or so is seeing the exact same product twenty times under different nonsense-word brands. Recalling a recent example, in about a minute I can find the exact same pair of water shoes sold as: gracosy, MAYZERO, LINGTOM, Wonesion, JointlyCreating, Centipede Demon, hiitave, and more. Another one is Belilent, Alibress, SUOKENI, Zhuanglin, Dreamcity, and so on. Same pictures, almost same descriptive text, with only minor cosmetic differences.

Are these different companies that happen to use the same supplier? It's possible, but it could also be one company creating multiple pseudo-brands to game the system. I could probably even find out, but I don't care. The same physical thing shipped for the same price from the same Chinese factory shouldn't show up twenty times. As long as search results are filled with crap, they're useless. It's the combination of fake reviews and this kind of flooding that makes me want to leave and never come back.
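
Spotting these pseudo-brand clones programmatically is not hard; a toy sketch that pairs up near-identical listing titles with difflib (the titles are invented, and real listings would need more normalisation):

    from difflib import SequenceMatcher

    listings = [
        'gracosy Water Shoes Quick-Dry Barefoot Beach Swim',
        'MAYZERO Water Shoes Quick Dry Barefoot Beach Swim',
        'Anker PowerLine Lightning Cable 3ft',
    ]

    def clone_of(a: str, b: str) -> bool:
        # Drop the leading brand token and compare the rest of the title.
        rest_a = a.split(' ', 1)[1].lower()
        rest_b = b.split(' ', 1)[1].lower()
        return SequenceMatcher(None, rest_a, rest_b).ratio() > 0.9

    for i, a in enumerate(listings):
        for b in listings[i + 1:]:
            if clone_of(a, b):
                print('probable clones:', a, '<->', b)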

asdff(10000) 4 days ago [-]

The store is an absolute mess, even with major brands. The exact same pair of Nikes could be listed in a half dozen separate categories sold by two dozen different sellers at shipping speeds ranging from two hours to two months and prices ranging from $0.06 - $649.58.

You end up having to spend 10x the amount of time down the rabbit hole of different categories, vaguely different product titles, different sizes ('size 10', 'size TEN' '10(m)', etc.), and different names for the same color shoe, all to desperately find that low price/size/color combination that drew you in from the search results in the first place.

The store desperately needs moderation to tidy it up. I'm sure the devs are patting themselves on the back for all the extra engagement they are milking out of me, but frankly I'm using the site less and less to the point where I've cancelled my prime membership. The only thing keeping me is milking their free shipping and 3% back card, and only if local alternatives fail me.

sytelus(310) 4 days ago [-]

This is one of the most problematic. I think 3 out of 5 products I usually search for turn up massive levels of white-label crap. I suspect these are Alibaba-based sellers who rebrand their white-label fake products on Amazon literally 20 to 50 times under different cool-sounding brand names. They have different costs and slightly different descriptions but otherwise no real way to differentiate them. These listings completely take over search results and push any genuine brands not savvy with SEO way out on the 3rd or 4th page, creating a denial-of-service attack on customers. Now I have made a habit of not falling for this white-label crap and explicitly look out for 'made in XYZ' in the description. I think a lot of genuine US-based brands have gone bankrupt because of this.

Amazon needs a complete reboot of their search given this level of white-label spam and their inability to effectively de-dup results.

LoSboccacc(4097) 4 days ago [-]

yeah fake reviews are an industry thing and basically not much different from advertising, and once you treat them like that they aren't such a grating issue... what is a huge problem to me instead is that their search is completely useless: everyone writes everything on their pages, so even if you look for a specific brand of item, even if you further filter by brand, nothing seems to work and a lot of junk fills up the result page.

that and commingled inventory.

currently I trust AliExpress more than Amazon to deliver the product as pictured.

panic(121) 4 days ago [-]

'There's No Such Thing as a Free Watch' by Jenny Odell is a deep investigation of this phenomenon and a really good read: http://www.jennyodell.com/museumofcapitalism_freewatch.pdf.

fucking_tragedy(10000) 4 days ago [-]

> Are these different companies that happen to use the same supplier?

You can often find the exact product, same pictures and all, on Alibaba. Only difference is the branding, so it's obvious where sellers are sourcing their products.

It's one of the reasons I use AliExpress: it cuts out the middlemen and their markup.

warent(2650) 4 days ago [-]

What you're experiencing is called dropshipping, and it's becoming popular because amateurs are seeing it as a 'get rich quick scheme.' There's a product called Oberlo (https://www.oberlo.com/) where you can easily browse Chinese items for sale and add them to your own Shopify store. Then you can use a free plugin to integrate Shopify with Amazon. That's why you're seeing a million of the same item, because it's tons of people browsing the same products with this same setup.

fastball(4037) 4 days ago [-]

I actually love it when I see this on Amazon, as it tells me that I can just go on Alibaba/Aliexpress and order the same thing from there for a fraction of the price.

ginger123(4087) 4 days ago [-]

If you are buying electronics, buy it from Best Buy, Costco, Target, Walmart or another retailer with a physical store. Best Buy matches Amazon's prices and their merchandise is not fake.

sytelus(310) 4 days ago [-]

BestBuy in my experience makes it fairly hard. First you have to buy at their high price in the store. Then you must call customer support and send them a screenshot of the Amazon listing within 14 days. I even tried that, only to find that their website was down, and other times their office hours were in a different time zone. I wouldn't recommend price matching for any but the most determined customers. Their store employees refuse to match the price on the spot even when shown the Amazon website price.

dRaBoQ(10000) 4 days ago [-]

and it has a US warranty.

When I had an issue with my Moto Z (which I got through same-day Prime delivery), Motorola told me they won't cover it because the IMEI said it's from India, and it can only be covered if I send it to India.

dontbenebby(3995) 4 days ago [-]

Can Best Buy do in-store pickup?

(Ex: I order online and the item is waiting for me rather than hunting through shelves)

If so I may consider cancelling prime, between that and Target offering free shipping to Red Card holders.

russdill(10000) 4 days ago [-]

B&H

dymk(10000) 4 days ago [-]

Unfortunately, no physical retailers around me have random things like 3D printer filament, various specific fasteners, or a particular inexpensive picture frame. Amazon wins at sheer variety of inventory.

Also, if a particular popular brand/model is at a big box store, the Amazon listing is almost certainly the actual thing. Random off-brands might be fake, but you run the same risk at Walmart as you do with Amazon then.

massivecali(10000) 4 days ago [-]

I used to think the same way about Fry's back in the day, until it came out that they were re-shrinkwrapping broken returns and putting them back on the shelves. After seeing the recent article about fake Chinese iPhones being returned to Apple stores for exchanges, I wonder what wave of fraud is next on the horizon. All of the stores you named have open-box discount items too.

will_pseudonym(10000) 4 days ago [-]

'ReviewMeta, a US-based website that analyses online reviews, said it was shocked at the scale of the unverified reviews, saying they were 'obvious and easy to prevent'.'

Unverified reviews may be easy to prevent by disabling unverified reviews, but then the scam just includes one extra step: having each reviewing account purchase the item on Amazon, then reimbursing the account. Easy verified purchase.

You also would lose out on actual purchasers who bought it elsewhere than Amazon who would want to leave reviews.

The problem of bad reviews is definitely an unsolved problem, as even if you build in trust mechanisms (and Amazon surely does this already), the scammers will build networks of self-reinforcing bots. You could calibrate the system to heavily discount reviews of reviewers which have any 'scam' tags, but then the networks would just take longer to build, leaving legit reviews to build up trust, and then leaving false reviews after having built up that trust. These cat and mouse games will go on a long time.

The same issues happen across every open network where identity isn't verified. As much as I (and others historically) have benefited from pseudonymity, it's definitely being weaponized (Amazon, Facebook, Twitter, etc) to benefit certain entities (companies, governments, etc), and the ultimate result is a loss of trust in the networks at large. I don't know what the solution will be, but the solution will be incredibly valuable.

I've thought about some sort of hybrid account handle system where you have one account that has two handles associated with it -- one, your real-world name, and another your pseudonym, but the single account is verified by real-world ID. As a contributor to the network, you could post as your pseudonym or your real world name, depending on the contribution that you are comfortable making, and as a consumer of the network, you could choose at any time to only view content of the network posted by real-world names, or pseudonyms. This would limit the number of possible astroturfed accounts, but still allow for sensitive discussion of heated matters.

Just throwing the idea out there because the need for some kind of solution to this destruction in network quality is something that affects us all.
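
The dual-handle idea, sketched as a toy data model just to make the shape concrete (entirely hypothetical):

    from dataclasses import dataclass

    @dataclass
    class Account:
        real_name: str     # shown when posting under the verified identity
        pseudonym: str     # shown when posting pseudonymously
        id_verified: bool  # a single real-world ID check backs both handles

    def post(account: Account, text: str, as_pseudonym: bool) -> str:
        if not account.id_verified:
            raise PermissionError('account not ID-verified')
        handle = account.pseudonym if as_pseudonym else account.real_name
        return f'{handle}: {text}'

    acct = Account('Jane Doe', 'night_owl', id_verified=True)
    print(post(acct, 'this product is great', as_pseudonym=True))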

8note(10000) 4 days ago [-]

the problem you'll run into is making sure those 'real IDs' are actually real IDs

steve19(3599) 4 days ago [-]

How about simply filtering Amazon reviews to only those left by people with a wide variety of expenditure who spend over $500/year?

russdill(10000) 4 days ago [-]

Even if you add the ability to filter, it won't help unless it's the default. You'll still have a huge number of buyers choosing the product due to fake reviews and driving up the sales numbers. Since they aren't the smartest shoppers, they're also likely to leave their own review and be influenced by the other 5 star reviews.

blibble(4005) 4 days ago [-]

I no longer buy from Amazon: batteries, chargers, flash drives/SD cards, any sort of food (or container for food).

Essentially I only buy stuff where it's completely obvious if it's fake, or that's too expensive/niche to bother counterfeiting.

colejohnson66(3934) 4 days ago [-]

For batteries, AmazonBasics is pretty reliable

AmVess(10000) 4 days ago [-]

I've read articles of people getting fake HAND SOAP from Amazon. You know, the less than $1/bar soap.

Fake laundry detergent is also a big one.

The fake bar soap has led me to believe that people will make counterfeits of anything they can.

weiming(3998) 4 days ago [-]

Have you had actual experiences with fake food? Some reviews for bottles of Evian or Fiji water sounded alarming when I wanted to order from Amazon (labels attached incorrectly, bottle cap unusual tint, things like that), so we just stuck with local supermarket delivery.

HarryHirsch(3061) 4 days ago [-]

batteries, chargers, flash drives/SD cards, any sort of food

Also printer toner. The asking price is close to the price quoted on the manufacturer's website, but you never know if you'll get an original, a half-empty or a remanufactured cartridge, and the reviews reflect that. The cherry on top is that if you try to sell a spare cartridge yourself, Amazon won't let you sell toner; you need to apply for permission!

tasty_freeze(10000) 4 days ago [-]

I wish I had kept a link, but I was looking to buy a book on music theory or something similar. One of the books had 13 or so reviews, all glowing. Many were of the form 'Exactly what I was looking for!' or 'Just perfect!' with nothing more.

Two of them had the same surprising word that made no sense in the context of the sentence: both reviews used the word 'goal.' Then it hit me: either in the directions telling them what to review or in their own attempt to translate to English, the auto-translation picked the wrong synonym, choosing 'goal' instead of 'score.'

duskwuff(10000) 4 days ago [-]

Or the reviews were being run through an overeager text spinner.

tapland(10000) 4 days ago [-]

I sometimes miss my browser translating reviews on amazon.de and get very, very confused.

wallace_f(1580) 4 days ago [-]

Yea, and spurious reviews are egregious on other markets such as booking.com, etc.

Booking sites don't just use fake reviews, but also hide or even delete bad reviews. I've personally seen this because I travel a lot.

As much as I dislike Google, they act as a neutral third party for hosting hotel and restaurant reviews. Amazon, however, wants good reviews on products to make sales.

Pxtl(10000) 4 days ago [-]

I would pay substantially more than Amazon prices for a store that actually curated their inventory to offer quality stuff instead of a firehose of algorithmically-reviewed trash.

I shop at Amazon only for the selection, not the experience. Not even for the price.

andrewxhill(3449) 4 days ago [-]

If you can handle a fraction of the inventory, you can use a trusted reviewer like https://thewirecutter.com/

sundayedition(10000) 4 days ago [-]

One product (power bricks for MacBooks) had over 1,000 five-star reviews; then the one-star reviews came in from people whose power adapters actually caught fire. Amazon, or maybe the seller, shut the review/product down. Now, it's back:

https://www.amazon.com/KUPPET-MacBook-17inch-Compatible-MacB...

Lots of verified purchases. All the reviews are within the same date range. It doesn't take machine learning or advanced AI to catch this; a simple SQL statement should be enough to flag these. But still, they persist.
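
The 'simple SQL statement' would amount to a GROUP BY on product and date; sketched here in pandas against a hypothetical reviews table:

    import pandas as pd

    # Hypothetical reviews table: one row per review.
    df = pd.DataFrame({
        'product_id':  ['B01', 'B01', 'B01', 'B02'],
        'review_date': ['2019-04-10', '2019-04-10', '2019-04-10', '2019-03-01'],
    })

    # Equivalent of GROUP BY product_id, review_date, as a share per product.
    daily = df.groupby(['product_id', 'review_date']).size()
    total = df.groupby('product_id').size()
    burst_share = daily.div(total, level='product_id')

    # Flag products where most reviews landed on a single day.
    print(burst_share[burst_share > 0.8])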

harry8(4097) 4 days ago [-]

Ergo, Amazon cannot do Stats/ML/AI/call it what you will. The claims that they have an AI advantage don't pass the sniff test.

technofiend(10000) 4 days ago [-]

My cynical nature assumes some Amazon MBA has done the math and concluded that the gain in revenue from Amazon-only brands will exceed that lost from people abandoning the rest. This is based on the assumption that Amazon's Fulfilled By Amazon cobranding program does not allow third party participation for Amazon's in-house brands like Amazon Basic.

In light of that admittedly negative view, what's Amazon's motivation to limit fake reviews, since they don't damage Amazon's own product lines, albeit at the cost of damaging the brand as a whole?

tracker1(4099) 4 days ago [-]

At this point, I trust the Amazon brands less than the knockoffs. Half of the Amazon-labelled products I've used didn't work well or for very long.

bkraz(3948) 4 days ago [-]

I once wrote an Amazon review in which I said that the other reviews followed a pattern of manipulation. Amazon deleted my review, and sent me a warning letter : https://twitter.com/BenKrasnow/status/1106385729435787264?s=...

delecti(10000) 4 days ago [-]

I'm with Amazon on this one. Reviews are for the product, not to side-talk other reviews. Report the other reviews if there's a problem. Of course they probably aren't likely to do anything even then, but cross chatter between reviews is not a trend they'd want to allow either.

twblalock(10000) 4 days ago [-]

I use Amazon a lot but I won't buy expensive brand-name stuff there anymore. Case in point: I am going to buy a Starrett combination square and Amazon has a nice price with free shipping, but I'd bet there is a 50/50 chance I get a counterfeit.

I'd also put your chances of getting a fake $9 Casio watch on Amazon at 50/50.

As a software engineer I kinda sympathize with Amazon -- no matter what system you come up with, people are going to game it. It's a very complex moving target and you will never be able to eradicate all of the scammers. At the same time, I think they could do a lot better than they do today, and I wonder if they actually try.

sytelus(310) 4 days ago [-]

Weeding out fake products is not a software or machine learning issue. It's a process issue and much simpler to solve. You require all your merchants to provide a legally traceable identity. Your agreement should let you put payments on hold, with arbitration fees, when fake-product complaints are received. All big brick-and-mortar shops do this. It would be very hard for any merchant to introduce a fake product into Fred Meyer or Costco without financial and potentially legal consequences. Amazon just needs an electronic version of that process.

jerkstate(10000) 4 days ago [-]

The only reviews worth reading on any site are 1- and 2-star reviews. I honestly don't care if someone is happy with a product; I want to know what problems might happen or what situations it doesn't work for.

I recently paid for a Consumer Reports digital subscription to get reviews for appliances and it's been worth every penny of the very reasonable $35 annual subscription.

int_19h(10000) 4 days ago [-]

I disagree. Most 5-star reviews are garbage, but sometimes you find one that goes into great detail about real-world device specs from the reviewer's own measurements, and even disassembles the device and notes various upsides and downsides of the design.

And conversely, many 1- and 2-star reviews are also garbage - the most common case is when the reviewer complains about the item not being delivered, or delivered late, or not being what they wanted due to some misunderstanding.

What's really needed IMO is some kind of reputation for reviewers, that is easy to track.

psadri(4094) 4 days ago [-]

My theory is that Amazon's new business model is increasingly going to be based on advertising.

Reliable reviews are incompatible with advertising revenue (why else would you have to advertise heavily if your products are really the best?).

The same observation may explain the discontinuation of the Amazon Dash Button. The Button would reorder the same product/brand over and over again - not compatible with advertising by competitors.

luckylion(10000) 4 days ago [-]

Amazon is making money on all sides: ads, 'amazon's choice', and when you buy something that will break soon, you buy twice, and Amazon gets their commission twice. So far, I don't see any incentive for them to change anything.

52-6F-62(3509) 4 days ago [-]

Is it possible there's a gulf between Amazon.com and Amazon.ca when it comes to these issues?

Maybe it's just in what I shop for but I haven't received a single counterfeit item, or come across fraudulent listings.

I've seen my fair share of imitation (and probably trademark-infringing) products and cheap crap, but I try and avoid that stuff.

So my experience has largely been positive—but there's no shortage of horror stories. That seems the norm around this board.

So I wonder is it just worse on the American side? (for reasons of volume or targeted marketing or whatever)

colejohnson66(3934) 4 days ago [-]

I'm in America and I too am confused at all the negative comments about counterfeits. I've personally ordered hundreds of items from Amazon, but never a counterfeit

juskrey(4096) 4 days ago [-]

I am always starting from bad reviews. Ironically, very often they are the source of the information which makes me buy a product immediately.

E.g. if the book author is 'arrogant', or most product problems are coming from hysterical idiots with none of them due to the manufacturer.

ikeboy(427) 4 days ago [-]

Yes, I've heard this recommended as a strategy for sellers - leave yourself negative reviews that are clearly not issues with the product.

rhizome(4081) 4 days ago [-]

Amazon lets people sell $5 Ikea items for $45. Fuck them.

ryacko(10000) 4 days ago [-]

https://www.usatoday.com/story/news/nation-now/2017/10/24/or...

Amazon lets drug dealers sell thousands of dollars of weed through their storefront.

I wonder if Amazon allowed drug-sniffing dogs to search the warehouse; if they didn't, it seems like an endorsement of some kind.

sadlion(10000) 4 days ago [-]

I buy my supplements like vitamins, protein bars and powder from Amazon. The stock commingling and counterfeits are worrying me now. Should I switch to local shops, or am I being too paranoid? For non-edibles, with all the fake review sites out there, are sites like The Wirecutter and Consumer Reports still trustworthy?

masonic(2371) 4 days ago [-]

I bought Schiff's sleep aid from Amazon, and when I compared the bottle with a Costco-purchased one, there were clear differences and an overlaid UPC label.

80mph(3759) 4 days ago [-]

I've switched to Vitacost for my supplements, and am very satisfied. I'm in CA, and they have a warehouse in NV, so I often get my stuff within 48 hours. They have a decent selection, and frequently offer promo codes. Currently, some promo codes can even be combined, although that might not last much longer.

I should add, I don't rely on Vitacost reviews either. I use ConsumerLab to narrow down my choices.





Historical Discussions: South Korea now recycles 95% of its food waste (April 13, 2019: 536 points)

(536) South Korea now recycles 95% of its food waste

536 points 7 days ago by okket in 14th position

www.weforum.org | Estimated reading time – 5 minutes | comments | anchor

The world wastes more than 1.3 billion tonnes of food each year. The planet's 1 billion hungry people could be fed on less than a quarter of the food wasted in the US and Europe.

Image: UN FAO

In a recent report, the World Economic Forum identified cutting food waste by up to 20 million tonnes as one of 12 measures that could help transform global food systems by 2030.

Now South Korea is taking a lead, recycling 95% of its food waste.

It wasn't always this way in the country. The mouth-watering array of side dishes that accompany a traditional South Korean meal - called banchan - are often left unfinished, contributing to one of the world's highest rates of food wastage. South Koreans each generate more than 130 kg of food waste each year.

By comparison, per capita food waste in Europe and North America is 95 to 115 kg a year, according to the Food and Agricultural Organization of the United Nations. But the South Korean government has taken radical action to ensure that the mountain of wasted food is recycled.

As far back as 2005, dumping food in landfill was banned, and in 2013 the government introduced compulsory food waste recycling using special biodegradable bags. An average four-person family pays $6 a month for the bags, a fee that helps encourage home composting.

The bag charges also meet 60% of the cost of running the scheme, which has increased the amount of food waste recycled from 2% in 1995 to 95% today. The government has approved the use of recycled food waste as fertilizer, although some becomes animal feed.

High-tech food waste recycling machines in Seoul.

Image: Wikimedia

Technology has played a leading part in the success of the scheme. In the country's capital, Seoul, 6,000 automated bins equipped with scales and Radio Frequency Identification (RFID) weigh food waste as it is deposited and charge residents using an ID card. The pay-as-you-recycle machines have reduced food waste in the city by 47,000 tonnes in six years, according to city officials.

Residents are urged to reduce the weight of the waste they deposit by removing moisture first. Not only does this cut the charges they pay - food waste is around 80% moisture - but it also saved the city $8.4 million in collection charges over the same period.

Waste collected using the biodegradable bag scheme is squeezed at the processing plant to remove moisture, which is used to create biogas and bio oil. Dry waste is turned into fertiliser that is, in turn, helping to drive the country's burgeoning urban farm movement.

The number of urban farms or community gardens in Seoul has increased sixfold in the past seven years. They now total 170 hectares - roughly the size of 240 football fields. Most are sandwiched between apartment blocks or on top of schools and municipal buildings. One is even located in the basement of an apartment block. It is used to grow mushrooms.

The city government provides between 80% and 100% of the start-up costs. As well as providing food, proponents of the scheme say urban farms bring people together as a community in areas where residents are often isolated from one another. The city authorities are planning to install food waste composters to support urban farms.

Banchan dishes are often left unfinished.

Image: Wikimedia

Which brings us back to banchan. In the long term, some people argue South Koreans will need to change their eating habits if they are really going to make a dent in their food waste.

Kim Mi-hwa, chair of the Korea Zero Waste Movement Network, told Huffington Post: "There's a limit to how much food waste fertilizer can actually be used. This means there has to be a change in our dining habits, such as shifting to a one-plate culinary culture like other countries, or at least reducing the amount of banchan that we lay out."




All Comments: [-] | anchor

the_economist(3610) 7 days ago [-]

Before Haber–Bosch, everyone recycled 100% of their food waste.

aitchnyu(10000) 5 days ago [-]

And countries went batshit crazy - the Saltpetre War of Chile vs Bolivia and Peru was fought over caves of bat guano, which was the most valuable fertiliser of the day.

jdietrich(4107) 7 days ago [-]

>The bag charges also meet 60% of the cost of running the scheme

Recycling metals is substantially profitable, because scrap metal is actually worth something. Making new metal from old metal requires far less energy than making new metals from raw ore, even factoring in the costs of transportation and processing.

Does collecting and composting food waste actually result in a net reduction of our use of finite resources, or is it just a sop to make us feel better about throwing away food? Is this really recycling, or is it a waste of resources at one step removed?

AngryData(10000) 7 days ago [-]

60% of the world's crop yield is the direct result of fossil-fuel-derived fertilizers. Recycling food is recycling fossil fuels; otherwise it just ends up rotting in a landfill, polluting the environment just the same as burning the fuels, except you aren't getting anything out of it.

Haga(10000) 7 days ago [-]

One can extract the energy in food?

tgb(10000) 7 days ago [-]

It certainly can be a reduction if you can compost at home. The result is useful. The effort is very minimal if you don't need it to compost quickly. It doesn't need to be transported to a landfill. And my understanding is that it breaks down into carbon dioxide instead of methane if you compost instead of landfill, which is a win for climate change.

cjensen(3915) 7 days ago [-]

Food used on farms decomposes directly into the atmosphere, including methane. Food sent to landfill decomposes poorly with an increased methane output, but some of the methane can be captured to generate electricity.

I'm not qualified at all to analyze this properly. The former method makes use of food to displace use of fossil fuels in farming. The latter displaces use of fossil fuels for electricity generation.

Is there research which demonstrates that one method is better than the other?

oska(577) 6 days ago [-]

I recently learnt of a food waste recycling method that avoids decomposition and release of methane. It is called bokashi and it uses homolactic fermentation to break down the food waste. Quoting the wikipedia article [1]:

> Homolactic fermentation breaks no carbon bonds and emits no gas; its overall equation is C6H12O6 (carbohydrate) → 2 CH3CHOHCOOH (lactic acid). It is a mildly endothermic reaction, emitting no energy; the fermentation vessel remains at ambient temperature.

Interestingly, this method was historically developed in Korea, as is also detailed in the wikipedia article.

[1] https://en.wikipedia.org/wiki/Bokashi_(horticulture)
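
The stoichiometry behind the "no gas" claim is easy to verify: glucose and two molecules of lactic acid contain exactly the same atoms, so no carbon can escape as CO2 or methane. A quick sanity check in Python (my own illustration, not from the article or the wiki):

    # Homolactic fermentation conserves every atom, so nothing is emitted as gas.
    # C6H12O6 (glucose) -> 2x C3H6O3 (lactic acid, CH3CHOHCOOH)
    glucose = {'C': 6, 'H': 12, 'O': 6}
    lactic_acid = {'C': 3, 'H': 6, 'O': 3}

    products = {atom: 2 * n for atom, n in lactic_acid.items()}
    assert products == glucose  # atoms balance exactly; no gas leaves

    atomic_mass = {'C': 12.011, 'H': 1.008, 'O': 15.999}
    molar_mass = lambda f: sum(atomic_mass[a] * n for a, n in f.items())
    print(molar_mass(glucose))          # ~180.16 g/mol in
    print(2 * molar_mass(lactic_acid))  # ~180.16 g/mol out, mass conserved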

punnerud(1357) 7 days ago [-]

Norway: 50% of food, 77% of energy in waste, and 97% of toxic waste is recycled. https://translate.googleusercontent.com/translate_c?depth=1&...

We are also rebuilding some of our plants to sort waste with machine learning. The initial tests are really promising. Several of them are already sorting out most plastic using spectrometer-sensors, so we don't need to do it manually. In combination with the machine learning the tests get it up to 99.9% or higher. https://translate.googleusercontent.com/translate_c?depth=1&...
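
For anyone curious how spectrometer sensors plus machine learning fit together: each item on the belt yields a reflectance spectrum, and a classifier maps that spectrum to a material class. A minimal sketch of the idea on synthetic data (my own toy illustration; the plants linked above will use far more elaborate pipelines):

    # Toy spectral sorting: classify items as plastic vs. non-plastic from
    # synthetic 'near-infrared' spectra (64 reflectance bands per item).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_items, n_bands = 2000, 64

    X = rng.normal(1.0, 0.05, size=(n_items, n_bands))  # baseline reflectance
    y = rng.integers(0, 2, size=n_items)                # 1 = plastic
    # Give 'plastic' items a characteristic absorption dip around band 40.
    dip = np.exp(-0.5 * ((np.arange(n_bands) - 40) / 3.0) ** 2)
    X[y == 1] -= 0.2 * dip

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    print('held-out accuracy:', clf.score(X_te, y_te))  # near 1.0 on toy data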

EuroShill(10000) 6 days ago [-]

OK, but what does this have to do with the original article, other than the usual 'Europe does it better'? That's right: absolutely nothing.

airstrike(3040) 7 days ago [-]

Once upon a time I came across AMP Robotics[0], a company that also uses ML and robotics for sorting waste. Pretty nifty project, in case you're curious. You can see the robots in action on their website[1]

__________

0. https://www.amprobotics.com/

1. https://www.amprobotics.com/amp-cortex

thatwasunusual(10000) 7 days ago [-]

Recycling doesn't help much as long as Norwegians throw away 40-50 kg of food per person each year. :(

caiob(10000) 6 days ago [-]

How easy is it to hit 50% with such a (very) small population?

benj111(4035) 7 days ago [-]

'per capita food waste in Europe and North America is 95kg to 105kg a year'

That's a massive amount, and that is just waste in the household, not waste in the farm or factory.

I can understand some wastage of salad and berries etc, but they're all lighter stuff. How can the average westerner waste 2kg of food a week???

newnewpdro(4071) 7 days ago [-]

I've lived with a variety of random people in shared circumstances, and it seems quite common for people to pack a refrigerator with perishables, either leftovers or new groceries, then forget it exists while continuing to eat the most convenient stuff at restaurants/the office.

Refrigeration in general encourages the behavior without delivering on convenience: you generally have to reheat or partially recook the food, only to end up with something inferior to the more convenient option of food freshly made by someone else.

People I've known feel accomplished just by asking to take leftovers home and placing them in the refrigerator with the feigned intention of eating them, knowing full well they will spoil and go to waste. It's an absurdity: they add packaging waste (often plastic, at least in the bag) to the charade, mostly because they don't want to appear wasteful at the place where they didn't finish their meal, and would rather dispose of it in the privacy of their home after spending a week's energy refrigerating it.

pkaye(10000) 7 days ago [-]

This probably includes all stages of the food value chain. A good chunk will be at the farm, but farms generally operate at a scale big enough to redirect waste to animal feed. Or it could be at the grocer who bought too many carrots that are starting to wilt; at that stage things are mixed up too much, so the waste is more likely to be redirected to compost. Or it could be at the restaurant that gives you a big plate of food when you're full and you don't take the rest home.

http://www.fao.org/save-food/resources/keyfindings/en/

SllX(10000) 7 days ago [-]

It took me a long time when I first started living alone and taking care of all my meals to figure out how to not waste my food, mostly through trial, error, and taking note of every single useful bit of advice I came across online or in meatspace.

I still don't think I'm particularly efficient, just that I waste a lot less than I used to, and if I could fit a freezer chest in my apartment, I know a few ways I could waste even less. Mainly what I do now is shop at a local grocery just about every day (I'm blessed with an abundance of those and good options) and try not to buy more than I will cook in a day or two. This has become easier since switching to an almost entirely meat, egg and vegetable diet, so I don't have dairy products, soy products, bread, pasta, cereals, noodles, beans, or anything besides my spices, dried herbs, salts, oils and vinegars sitting around. Despite that, I still end up throwing away more than I would like every few weeks, but probably not 2kg a week.

Had I grown up in a household with people who knew how to manage a household, it would have been a lot less trial and error. But with our over-reliance on packaged foods, instead of just what you can find in the produce section and at the butcher, a lot of people like me are probably growing up not learning how to manage their pantries and fridges.

That's without even factoring in restaurants. It isn't uncommon for people to not finish their food and leave it rather than taking the rest with them, and more so the more people eat out. Rather than finishing their leftovers at home, if they even took it home, people will often just go out to eat again the next day.

darkpuma(10000) 7 days ago [-]

If anything, Americans should be throwing out even more food. With the astronomical obesity rates, I don't think we should be encouraging people to finish what's on their plate.

If that seems wasteful to you, then I assert you're not looking at the big picture. Habitual overeating causes even worse portion control in the future, which causes even more food waste in the long run.

crazygringo(3737) 7 days ago [-]

Yes, that sounds insane. That's 0.6 lbs of food per day, which is the same, by weight, as throwing away every day (using McD's as an internationally recognizable reference):

- 1.5 McDonald's quarter-pounder sandwiches [1]

- or 2.5 portions of McDonald's medium fries [2]

That seems like a lot of food to waste. Like that just doesn't pass the smell test...

Maybe they're counting liquids too, like milk that's gone past its expiration date? Otherwise I just can't wrap my head around this.

[1] https://en.wikipedia.org/wiki/Quarter_Pounder#Product_descri...

[2] https://www.mcdonalds.com/gb/en-gb/help/faq/19041-how-much-d...
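
The per-day arithmetic here does check out; a quick conversion script (illustration only):

    # Convert the quoted 95-105 kg/year household figure to lb/day.
    KG_PER_LB = 0.45359237
    for kg_per_year in (95, 105):
        lb_per_day = kg_per_year / 365 / KG_PER_LB
        print(f'{kg_per_year} kg/yr = {lb_per_day:.2f} lb/day')
    # -> 0.57 and 0.63 lb/day, bracketing the ~0.6 lbs cited above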

BurningFrog(10000) 7 days ago [-]

Keep in mind that a lot of that 'waste' happens before the food reaches us consumers.

briandear(1933) 7 days ago [-]

Do you eat onion peels? How about banana peels? Do you eat the bones of meat? 2kg isn't that much.

sfotm(10000) 7 days ago [-]

I don't think it's meant in the purely personal sense. Imagine how much unused food must be thrown out at restaurants and delicatessens. Buffet and prepared foods probably don't roll over into the next day.

benj111(4035) 7 days ago [-]

I note that most replies seem to point to the bad habits of others.

This is an average, so collectively we are all wasting this much. Presumably there's either a stigma against wasting food, which makes the figure all the more crazy, or the people who waste food aren't interested in this kind of article, which would suggest they waste even more than average.

altcognito(10000) 7 days ago [-]

Food has a high % of water content and is therefore very heavy.

remote_phone(4015) 7 days ago [-]

It's very easy. At least in the US, portion sizes are too big. If you go to a restaurant, it's profitable for them to increase the size of a meal by 50% and charge 25% more, which looks like savings and value to the customer. But then the food ends up being wasted.

linuxftw(10000) 7 days ago [-]

Many people don't eat leftovers. In some cases it's cheaper to buy 5 lbs of something and throw out 2 pounds than to pay for 3 1-lb packages.

Have you ever been to a restaurant? Most people throw out over 25% of the food on their plates.

This is all a symptom of a distorted market place. Food's too cheap thanks to subsidized farming.

alpha_squared(4116) 7 days ago [-]

Between throwing out food that's gone bad and leaving unfinished plates at restaurants, I can absolutely believe that. There are some who strictly abide by the packaging expiration date and many people don't package the remainder of their restaurant dish to take home.

ip26(10000) 7 days ago [-]

I always want to know where that's measured. Presumably it includes food that went bad in the fridge, and food that was thrown away from the plate. But does it also include apple cores, animal bones, coffee grounds, onion peels, citrus rinds, banana peels, and so on & so forth? In a perfect world we'd make more soup stock, zest, and marmalade but I think compost is a pretty reasonable path for carrot tops and potato eyes.

garfieldnate(10000) 6 days ago [-]

Every time I ate pork there they always bragged about how it had been fed garbage T_T When I researched it later, it turned out that in the US we stopped feeding leftovers to pigs a long time ago due to health concerns. Laws were passed stipulating that garbage be boiled before being fed to pigs, but that made it too expensive so everyone stopped doing it.

hw(2630) 6 days ago [-]

Here in the SF Bay Area, some cities have started collecting food waste to be treated and fed to animals. There's been some minor backlash, due to the uncertainties around what goes into the food waste bin and how that gets treated and sorted and eventually processed into animal feed. I'm not aware of the law you mentioned, but it's being done.

maerF0x0(4011) 7 days ago [-]

> There's a limit to how much food waste fertilizer can actually be used

Once upon a time we fed food waste to animals to 'recycle' the caloric content. Think table scraps (but not meat) going to pigs or chickens.

It would be interesting if we closed the loop on this household waste.

Edit: Added note that I meant 'vegan' scraps.

benj111(4035) 7 days ago [-]

I believe in China it was common to site toilets over pig sties. Some loop-closing isn't a very good idea.

https://en.m.wikipedia.org/wiki/Pig_toilet

nradov(1030) 7 days ago [-]

Yes my uncle used to feed almost all food waste to his pigs. Then we ate the pigs. Unfortunately suburban Silicon Valley residents don't appreciate backyard pigs.

aitchnyu(10000) 5 days ago [-]

Is feeding meat to chickens and pigs a bad idea? I get that brain matter could cause prion diseases in cows.

keypress(10000) 7 days ago [-]

Stopped in the UK due to BSE, and not being able to audit ingredients.

I wonder if cockroach protein powder would bypass brain-wasting disease and other nasties.

RickJWagner(3967) 7 days ago [-]

That's awesome!

Food waste seems so very easily preventable. Take what you want, but eat what you take. Composting what's left over is an awesome idea.

Gene_Parmesan(10000) 7 days ago [-]

Food waste is a bigger issue than people putting too much on their plates. One of the highest sources of food waste is in the 'manufacturing'/distribution phases -- malformed, misshapen, and otherwise 'unsalable' produce is either left unpicked in the fields or rejected by groceries. More waste comes from rejections due to food safety regulations, overproduction, logistical errors resulting in spoilage, and the simple fact of groceries needing to display full bins/shelves of food at all times, even for low-volume products which they can never hope to sell in time.

It's like when individuals are made to feel bad over taking a five minute shower instead of a three minute shower, while Nestle is extracting millions of gallons of water without paying a cent for it. Individuals should absolutely do everything they can to reduce their own food waste, and composting is better than nothing, but as a global society we should really be focusing our efforts on the largest sources of waste first.

nightski(10000) 7 days ago [-]

Is it no longer waste if it sits in your gut?

avip(4078) 7 days ago [-]

More than 50% of food (by quantity) is lost in the supply chain, before it arrives on any plate.

OldHand2018(10000) 7 days ago [-]

It's important to bring up the Food Recovery Hierarchy when talking about food waste. Composting is not the least desirable outcome, but it is close. We really need to focus on the higher priority reduction methods:

https://www.epa.gov/sustainable-management-food/food-recover...

newnewpdro(4071) 7 days ago [-]

This implies composting food waste is only slightly better than landfill, which is obviously absurd.

Pointing out that the next best existing option after composting is landfill says nothing about the size of the gap separating the two on the efficacy axis. It only speaks to a lack of alternatives between composting and landfill.

I'd argue that burying your food waste in any land that food will potentially be grown in is infinitely superior to sticking it in a landfill where food will never be deliberately grown.

From my perspective, living on a rural property with an outhouse, where all food waste and human waste is buried in land that eventually grows food, your comment strikes me as ridiculous. Burying this waste adds obvious value to my land; when I put it in the dumpster instead, it completely escapes reuse, effectively exiting the system.

ip26(10000) 7 days ago [-]

This seems like a ridiculous chart. First of all, no homeless people or animals want to eat leaf litter, moldy vegetables, or coffee grounds. Second of all, the only inputs to my garden are water, sunlight, and compost, so as far as I'm concerned composting garden scraps is net zero at worst.

peteradio(10000) 7 days ago [-]

My neighbors probably hate me, but I compost all my food in the backyard. Last year was the first year we did it over winter, when the compost isn't even active, and it was also the first year we didn't have mice in the house.

luhn(3736) 7 days ago [-]

I know you're probably joking, but for the sake of anybody reading this who may be considering composting: a healthy compost doesn't smell bad (I honestly kind of like the smell) and doesn't smell strongly either, so neighbors won't mind or even know it's there. I compost on my small patio; there's a table and chairs just a couple feet from the composter and it hasn't been a problem.





Historical Discussions: LeBron James school that was considered an experiment is showing promise (April 15, 2019: 528 points)

(528) LeBron James school that was considered an experiment is showing promise

528 points 5 days ago by throwaway5752 in 3350th position

www.nytimes.com | Estimated reading time – 11 minutes | comments | anchor

The academic results are early, and at 240, the sample size of students is small, but the inaugural classes of third and fourth graders at I Promise posted extraordinary results in their first set of district assessments. Ninety percent met or exceeded individual growth goals in reading and math, outpacing their peers across the district.

"These kids are doing an unbelievable job, better than we all expected," Mr. James said in a telephone interview hours before a game in Los Angeles for the Lakers. "When we first started, people knew I was opening a school for kids. Now people are going to really understand the lack of education they had before they came to our school. People are going to finally understand what goes on behind our doors."

Unlike other schools connected to celebrities, I Promise is not a charter school run by a private operator but a public school operated by the district. Its population is 60 percent black, 15 percent English-language learners and 29 percent special education students. Three-quarters of its families meet the low-income threshold to receive help from the Ohio Department of Job and Family Services.

The school's $2 million budget is funded by the district, roughly the same amount per pupil that it spends in other schools. But Mr. James's foundation has provided about $600,000 in financial support for additional teaching staff to help reduce class sizes, and an additional hour of after-school programming and tutors.

The school is unusual in the resources and attention it devotes to parents, which educators consider a key to its success. Mr. James's foundation covers the cost of all expenses in the school's family resource center, which provides parents with G.E.D. preparation, work advice, health and legal services, and even a quarterly barbershop.




All Comments: [-] | anchor

nopinsight(802) 4 days ago [-]

Two major and quite distinctive improvements of the school I noticed from the article:

* Improving parental involvement and the parents' attention to their own education

* Utilizing the hidden power of role models: LeBron James and each kid's parents

Students spend significantly more time at home than at school. Parents interact with each kid one-on-one or in a small group. Thus, they can have much more influence than teachers on the kid's attitude, motivation, and habits regarding education.

If we look at a broader picture, most countries that do well in PISA, an international assessment of academic skills for school students, strongly value education at every level of society starting from parents and family. This includes Vietnam, a relatively poor country which, at rank 22, does better than quite a few Western European ones.

The US, at rank 31, should study this school and expand on good lessons learned from the experiment.

[1] PISA results map http://factsmaps.com/pisa-worldwide-ranking-average-score-of...

nopinsight(802) 4 days ago [-]

Vietnam's GDP (PPP) per capita in 2017 was about $6,500; the US's was about $60,000.

This implies that the US as a nation should have resources to support better education if they are deployed well, despite some discount from Baumol's cost disease.

danso(4) 5 days ago [-]

Related HN thread from 8 months ago (602 upvotes/463 comments): https://news.ycombinator.com/item?id=17661995

throwaway5752(3350) 5 days ago [-]

Sure, but it's nice to see the premise borne out by the early data:

The students' scores reflect their performance on the Measures of Academic Progress assessment, a nationally recognized test administered by NWEA, an evaluation association. In reading, where both classes had scored in the lowest, or first, percentile, third graders moved to the ninth percentile, and fourth graders to the 16th. In math, third graders jumped from the lowest percentile to the 18th, while fourth graders moved from the second percentile to the 30th.

The 90 percent of I Promise students who met their goals exceeded the 70 percent of students districtwide, and scored in the 99th growth percentile of the evaluation association's school norms, which the district said showed that students' test scores increased at a higher rate than 99 out of 100 schools nationally.

sudhirj(3625) 4 days ago [-]

They're pretty public about their 'secret sauce', which is to offer a pretty neat ladder up Maslow's pyramid, especially for the parents.

Any family worried about food / clothes can come into the pantry and take whatever they need. Physiological needs, check. Barbershop available. That's really interesting.

Safety needs: see above, with heavy emphasis on dealing with conflict situations. Celebrate coming to school, make sure it's always a safe place, extra hours and days to keep them off the streets.

Belonging and love: everyone in the school is one of the 'chosen ones'; they have a tribe, the teachers are on their side, and the parents are involved and accountable.

If you handle the first three levels for a person, hitting self-esteem and accomplishment (at a personal level; it need not be state's best or world's best) can come much easier, even with average-quality teaching. They're also making the parents baseline role models (they keep the kids clean and clothed, put food on the table, and pursue their own self-improvement) and plastering the environment with a topline role model, LeBron James (if he can do it, you could at least try as hard as he did).

ethbro(3767) 4 days ago [-]

My mother taught at Title 1 schools for years. She said the biggest gap was always family support.

Teachers don't have enough time to make up for a missing home environment, and (said in seriousness) it's a misuse of their time to play social worker (because there's no one else / no funding for anyone else).

We do a lot of dumb things in American public education, but one of the worst is misdeploying resources we do have and focusing on symptoms instead of root causes.

There was a school district (Kansas? Nebraska? Maybe?) that got stellar results just from co-locating county social services for parents at the school (unemployment offices, food stamp distribution, clinics, etc).

If a community needs help, don't send only teachers to fix it.

azernik(3745) 4 days ago [-]

The inclusion of a barbershop is very much an African-American cultural thing - these are traditional male social gathering places in that society. So this is probably not just providing a haircut, but also some things a bit higher up the Maslow hierarchy like that belonging you mentioned.

SpaceManNabs(10000) 4 days ago [-]

Of course it was going to be a success. Most other programs that have attacked intergenerational poverty via the three-pronged approach of better child care, housing, and economic opportunity have worked to great success.

The school doesn't try to solve all three at once, but its approach to child care makes the other two much easier for parents to manage.

fillskills(3088) 4 days ago [-]

Wonder if such a 3-pronged approach could be applied to the integration of immigrants.

azernik(3745) 4 days ago [-]

Also includes some direct economic benefits for parents - e.g. the food/clothes pantry mentioned, where parents can have basic needs taken care of without payment.

externalreality(3974) 4 days ago [-]

This is a good thing! I am 100% in support of James' endeavor here. However, students doing better when put in a better environment should not be surprising. I was a lower-income student who was shipped across town to a majority-white school as part of an anti-segregation program brokered with a federal oversight committee. The African Americans in that school were treated quite badly. On my first day of school I was labeled '<my first name> Brown' by the kindergarten teacher because another student in the class already had my first name. My last name isn't 'Brown'; that's my skin color. That was my first minute in school, and I'm only 36 years old, not 66.

istjohn(10000) 4 days ago [-]

If I can ask, how do you feel about the program that bussed you across town, overall? Do you feel you benefited on net despite the exposure to the racism, or would you prefer to have stayed in your neighborhood school if you could do it all again?

tracker1(4099) 4 days ago [-]

I'm just happy to see a positive article. I see so much political or tech news; I don't get much on the positive side, generally speaking.

singhrac(3888) 4 days ago [-]

I read this regularly to get that :) https://www.nytimes.com/spotlight/the-week-in-good-news

stevenwoo(3629) 4 days ago [-]

Wyatt Cenac's show Problem Areas is covering school issues this season. The second episode focused on solutions in several areas: continuing the education of kids kicked out of regular school for misbehavior (one principal put her desk in the main hallway and moved the school police toward a more humane, almost UK/Peel approach; the New York school system has about as many education 'cops' as Houston has actual cops), and the education of prisoners for rehabilitation. For a show with that title, it was pretty upbeat.

RcouF1uZ4gsC(3922) 5 days ago [-]

This illustrates that parents are the key component in childhood education. A worse than average teacher who teaches kids with involved parents will outperform the best teacher with uninvolved parents.

In addition, LeBron James has a huge amount of credibility built up with the students. I would guess that many of the students feel some sort of connection to him, and do not want to disappoint him.

dpflan(285) 5 days ago [-]

Indeed, schooling includes the influence of the parents; not pushing for helicopter parents, but for engaged and supportive ones.

matt4077(1176) 5 days ago [-]

I doubt there was ever much debate that parents are the single most important factor for children.

It's just that, from a policy perspective, it is extremely hard to actually have any impact on that behavior. It's basically social work, a concept for which there is essentially no money at scale in the US.

michaelgiba(10000) 5 days ago [-]

Great point

b_tterc_p(10000) 4 days ago [-]

I like the concept, but I'm not sure their stats check out. Maybe someone more knowledgeable can chime in. The article suggests a lot of the kids here were bottom-percentile performers on standardized tests (the literal 1st percentile).

I do wonder if the test is valid at scores that poor. Could it be that the bottom percentile is just the kids who guessed randomly and had bad luck, as opposed to, say, the ninth percentile who guessed randomly and had good luck? How likely would it be for a bottom-percentile student to stay bottom percentile with no special schooling? I really just don't have a good sense of what it means to strive for 10th-percentile performance on tests like these.
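
That intuition is easy to check with a simulation: pure guessers on a multiple-choice test land on a binomial distribution, so luck alone spreads them across a wide range of raw scores. A hypothetical sketch (the test length and number of choices are my own assumptions, not details from the article):

    # How far can luck alone separate students who guess randomly?
    import numpy as np

    rng = np.random.default_rng(0)
    n_students, n_questions, n_choices = 100_000, 40, 4

    scores = rng.binomial(n_questions, 1 / n_choices, size=n_students)
    print('5th percentile of guessers: ', np.percentile(scores, 5))   # ~6 correct
    print('95th percentile of guessers:', np.percentile(scores, 95))  # ~15 correct
    # Identical (zero) knowledge, very different raw scores: among the very
    # lowest scorers on a real test, some fraction is just unlucky.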

jonknee(1206) 4 days ago [-]

> The 90 percent of I Promise students who met their goals exceeded the 70 percent of students districtwide, and scored in the 99th growth percentile of the evaluation association's school norms, which the district said showed that students' test scores increased at a higher rate than 99 out of 100 schools nationally.

It sounds like almost all the students met their individual goals which means they are making progress no matter what percentile they started in. They're not out of the woods, but this is a population that previously had only ever seen bad results from the school system. It's a small experiment, but it's hard to spin the early results as anything but a good thing.

collective-intl(4105) 4 days ago [-]

I think there is a major flaw which you are getting at.

If you take the students who performed at the 10th-25th percentile in any school in one year, on average they would do better the next year because of reversion to the mean.

The way to understand it is that the population they chose did so poorly on last year's test that they likely did worse than they usually do. They are more likely to have had an off year.

For example, the NYT article mentions the girl who missed 50 days of school the previous year. It's more likely she won't miss so many days this year. That's reversion to the mean.

IMO, that throws all the results into question, as you would expect them to do better already.

In general, there are no panaceas in education. Any school which is claiming really great results pretty much never holds up. We've had decades of these articles with experts trying to figure out how to achieve better educational outcomes, and very few can be isolated. Even Bill Gates tried for a while.

Anyone who studies this stuff seriously will tell you educational outcomes are mostly based on innate talent.
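
The reversion effect itself is easy to demonstrate: select students on a noisy year-one score and their year-two scores improve on average with no intervention at all. A minimal simulation sketch (the noise level is an assumed parameter, not an estimate from the study):

    # Regression to the mean when selecting the 10th-25th percentile band.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    ability = rng.normal(0, 1, size=n)            # stable 'true' skill
    year1 = ability + rng.normal(0, 0.7, size=n)  # noisy test score
    year2 = ability + rng.normal(0, 0.7, size=n)  # no intervention at all

    lo, hi = np.percentile(year1, [10, 25])
    picked = (year1 >= lo) & (year1 <= hi)

    print('picked group, year-1 mean score:', year1[picked].mean())  # ~-1.4
    print('picked group, year-2 mean score:', year2[picked].mean())  # ~-0.9
    # The group improves with zero intervention, because part of its bad
    # year-1 showing was noise; real gains must beat this baseline drift,
    # which is why a control group matters.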

luckydata(4108) 4 days ago [-]

What that school is doing is really just common sense, and the success it is experiencing is a symptom both that what we've always known works... still works, and that we do a real shit job at education in this country.

This is not a problem you can fix with technology; the only technology needed is good food every day for the kids, and parents (or guardians) who can be involved in their kid's education.

suddenstutter(10000) 4 days ago [-]

Seriously, it's crazy how easily people are shocked by common sense being put to work.

supernintendo(3987) 4 days ago [-]

This story resonates with me deeply. I grew up in a lower-income, single-parent household, in a family environment plagued with drug addiction and feelings of economic hopelessness. I was on track to drop out by my senior year of high school. My father (who was always in and out of prison) committed suicide my freshman year of high school. I was distraught but blessed. My English teacher at the time noticed what I was going through and got me involved in a program for at-risk kids called AVID [1]. It is because of this organization that I was able to not only graduate from high school but also university, and go on to have a successful career in software development.

Our society pays a lot of lip service to youth being the future but we don't actually do much to help them succeed. The person you might call a loser or criminal was once a child who had the opportunity to become a shining example of what our society can produce. These children are being lost at some point along the way. Once they become adults we shame them for their life decisions but what are we actually doing to solve the problem?

When you say, 'hey, 90% of these students are outpacing peers in their district based on academic indicators', I say maybe LeBron is onto something. Let's adopt this approach, scale it and study the results.

[1] https://www.avid.org/

esalman(3322) 4 days ago [-]

The system is wired to goad students into falling into debt to sustain a decent living standard. I am transferring schools from a fairly low-cost city (Albuquerque, NM) to Atlanta. My own issues are not comparable to these kids', but about 60% of my stipend will potentially go to rent and more than 20% to fees, so I have started to doubt my decision to continue grad school.

chriselles(10000) 4 days ago [-]

Thank you for sharing your experience.

This looks pretty cool!

gigatexal(4009) 4 days ago [-]

This is so great! I hope you went back and thanked that English teacher. They don't get the thanks they deserve most of the time and they sure as hell don't get the pay they deserve. Teaching is a noble calling and I'm glad someone saw you and took the time to care.

vvpan(3914) 4 days ago [-]

AVID has information on their website but in your own words could you describe how it helped you? Very curious.

thatoneuser(10000) 4 days ago [-]

I grew up similarly. I've seen program after program come into my home town just for the administrators to pack up and leave town 1-2 years in, blaming everyone but themselves. It's very refreshing to see someone who 'wants to make the world better' actually doing it rather than paying lip service with the real goal of boosting their ego.

Hope this is legit. Tip my hat to LeBron for being a real hero to the underserved.

not-a-duck(10000) 4 days ago [-]

It seems this small success is indicative of a much larger failure: a failure of the greater educational system to do equal good to all who enter it. The system is rigged.

_lessthan0(10000) 4 days ago [-]

Top comments: shocking that people can't feel good about something because it didn't happen to them. I was hoping for some uplifting commentary, but like all other social media sites, HN is going down a bad path.

Retra(10000) 4 days ago [-]

Maybe our own emotions aren't the subject of importance here?

throwaway5752(3350) 4 days ago [-]

I think you could lead by example and post your own uplifting commentary, and that would be more effective.

spraak(3892) 4 days ago [-]

The predominant attitude is cynical

warent(2650) 4 days ago [-]

The article was only posted a couple of hours ago. Give it some time; the comments almost always come out really great. Scrolling through them now on my end the top comments currently look uplifting or scientific. I'm not really seeing the negativity you're referring to.

tomhoward(1118) 4 days ago [-]

Can you link to the comments you think are so terrible? Most people seem supportive.

basetop(10000) 4 days ago [-]

As a minority, I hope this isn't another 'minority school does well' story turned scam. It seems like we have these 'amazing successes' turn out to be scams every couple of years.

https://www.nytimes.com/2018/11/30/us/tm-landry-college-prep...

If it is a success, why turn it into a national story? Doesn't that put more unnecessary pressure on these schools and kids to 'succeed'? Wouldn't that put more pressure on these schools to cheat if expectations aren't met?

I don't understand why this is a national news story. Do the kids benefit from the extra national pressure? No. Do the schools benefit? No. Am I wrong in thinking only LeBron and the NYT benefit from this story? Why not let these kids and schools succeed quietly? These underprivileged kids have enough hurdles as is; do they really need the added burden of national coverage?

Someone1234(4109) 4 days ago [-]

> As a minority, I hope this isn't another 'minority schools do well turned scam'. Seems like we have these 'amazing successes' turned to 'scams' every couple of years.

It is a normal public school that is getting additional funding, and funding for specific additional programs (e.g. parental outreach, after school program).

Worst case scenario the funding dries up and it returns to being a completely normal public school with normal levels of public funding.

sct202(10000) 4 days ago [-]

These results at least aren't implausibly spectacular. The kids came in well below average and are still below average, just by less after a single year, which is promising without being unreasonable.

thatoneuser(10000) 4 days ago [-]

Because if this truly is a success then we want to spread the concept as far as we can as fast as we can. It can receive national attention and not harm those inside if done right.

shhehebehdh(10000) 5 days ago [-]

I didn't see anything about this in the article, so I ask here in the hopes that someone more knowledgeable can comment: how are they controlling for selection bias? Is there any way to select into or out of this school, or is it purely the standard districting system? Even if it's the latter, it's still possible to move into the area, as in many other places. Do we know to what extent that has been happening?

Edit: Oops, I was tricked by an advert. The article goes on to say that the students were admitted by lottery. The only question I'm immediately left with, then, is whether they had to enter the lottery or whether entry was automatic. And was their admission contingent on their parents' willingness to participate in these extra classes?

https://fredrikdeboer.com/2017/03/29/why-selection-bias-is-t...

eyeinthepyramid(10000) 5 days ago [-]

From the article:

'I Promise students were among those identified by the district as performing in the 10th to 25th percentile on their second-grade assessments. They were then admitted through a lottery.'

icelancer(3968) 5 days ago [-]

There is selection bias for sure, except it works in the opposite direction with regard to successful outcomes: IPS takes low-performing individuals and puts them into a lottery system.

This differs from a charter school or other alternative schools that skirt responsibility for special-needs kids... but it bears mentioning that IPS is not without controversy. The lottery system causes a lot of strife among eligible-but-unpicked families and also costs taxpayers a substantial sum; LeBron does not cover 100% of the costs, or even a majority of them.

https://en.wikipedia.org/wiki/I_Promise_School

ThomPete(636) 5 days ago [-]

Charter schools use the lottery method too, at least the one my two sons are at (Success Academy). It's a great school with great success, but a lot of that is obviously engaged parents (such as even entering the lottery).

ghda(10000) 4 days ago [-]

Segregating students by ability is one of the easy, cheap, sensible ways to improve outcomes that almost certainly works. Taking slower students out and teaching them at their own pace should benefit those taken out and those left behind.

Unfortunately, it's my understanding that this sort of thing winds up politically impossible in most parts of the US, as segregated-by-ability classrooms wind up looking uncomfortably like segregated-by-race classrooms.

krastanov(10000) 4 days ago [-]

On the other hand, Finland is often touted as having a great educational system and they very explicitly do not segregate by ability, rather they try to lift all boats. The problem of 'smart kids getting bored or not having opportunities to excel' somehow does not seem to be an issue (I am a bit confused that this is the case).

randomacct3847(2536) 4 days ago [-]

Isn't it obvious that funding public schools with local property taxes is what has created this messed up system where your zip code has a disproportionate impact on the quality of your education?

bzbarsky(1749) 4 days ago [-]

Just about every single state has state-level funding for schools that acts to even out those disparities.

Worse yet, spending (per pupil) on its own turns out to not be a very good predictor of quality of education. It doesn't even seem to be a great predictor if you control for parents' SES, from what I can see for various school districts in Boston's suburbs.

Put another way, a number of quite distressed school districts spend more than various 'good' school districts, with much worse results. DC public schools are a poster child here, but not the only example by any means. So it's not just a matter of funding levels at all.

somethoughts(10000) 4 days ago [-]

I've often wondered about the effectiveness of donations to supplemental after school/summer time tutoring/coaching in under-performing school districts [1][2] versus donations to full time, private charter schools such as this one.

It seems supplemental after-school/summer solutions would build on the existing public school system and fill in the gap between 3pm and 6pm and during the summer, when kids are likely to be less supervised. It'd also be more scalable to more children. I imagine it would also produce less angst among public school teacher unions, since it supplements them rather than replacing them.

[1] https://en.wikipedia.org/wiki/Silicon_Valley_Education_Found...

[2] https://svefoundation.org/get-involved/events/annual-dinner/

jrumbut(10000) 4 days ago [-]

This is not a charter school or private school, it is a public school that received additional money on top of its usual funding from Lebron James.

ngngngng(3786) 5 days ago [-]

I devote a lot of my time thinking about how to improve education. I love so much of what this school is doing, much of it being things I hadn't considered, since my education had far far different problems than these kids have.

Out of everything in the article, what impressed me the most was the roleplaying with the intervention counselor. What an incredible way to help kids learn how to behave and assimilate into society. I think that role-playing everyday situations should be a part of every student's education.

johnsimer(10000) 4 days ago [-]

I also think a bit about improving education.

What are the best things, from your thoughts/research, that someone or society as a whole could do to improve education?

exabrial(4047) 5 days ago [-]

Amazing to see a privately funded school do so well, especially one that targets at-risk children. I hope the lessons learned here inspire more private investment in education.

b_tterc_p(10000) 4 days ago [-]

It's mostly public, plus some private extras

dhritzkiv(10000) 4 days ago [-]

Or that it brings attention to neglected demographics and inspires greater demand and political will for improved public funding and resource allocation.

Privately funded white knighting should not be the answer.

tracker1(4099) 4 days ago [-]

It's more of a hybrid... It's publicly funded per student, similar to other schools in the area, but with additional funding from James' foundation. Of course, general school funding should probably be weighted a bit more toward the lowest 20-25% on performance and the top 10-15%, imho.

I wish more communities were more involved. I've often thought about how much things could improve overall if every parent were able to participate in one day of class a month.

mindfulplay(10000) 4 days ago [-]

It's a shame that a private citizen has to do this. It reflects poorly on our country that the economically disadvantaged really do not have a choice or a voice.

And these are not people who choose to be poor or incapable. They really don't have any other option.

whatshisface(10000) 4 days ago [-]

You could also say that it reflects well on our country that a private citizen was able to realize that something needed to be done differently, and then make the change themselves instead of hassling through a decades-long 'change the bureaucracy' adventure while competing with seven other equally motivated individuals who also want to change the education system, but in completely different ways...

nine_k(4085) 4 days ago [-]

OTOH it is great that a private citizen can see an opportunity to give kids a better education and actually follow through on that plan, making an example to replicate and build upon.

In many countries, this is not a given: either directly forbidden or infeasible due to red tape.

dannycastonguay(10000) 4 days ago [-]

It's a moment of pride that this country has citizens like him. It reflects well on the values of some of the celebrities like him, and hopefully it is inspiring others, thanks also in part to good journalism. It also didn't happen without the efforts of the kids who had to believe it was possible, put in the hours, and demonstrated proficiency on the tests.

spaginal(10000) 4 days ago [-]

It's not a shame. There was a time in this country when citizens of means were expected to step into the gaps that government either couldn't reach or couldn't fill well, and some of our best institutions come from this philanthropy.

I applaud Lebron.

cageface(3137) 4 days ago [-]

All the current left/right political dogfighting and cultural warfare is a smokescreen for what's really going on, which is a class war. And the rich are winning. Unfortunately it seems increasingly likely that in their limitless greed they are destroying the fabric of the society that made it possible for them to become so wealthy. Sadly, history seems to keep repeating this theme down the centuries.

thatoneuser(10000) 4 days ago [-]

Um, so what about the endless government programs out there that were supposed to achieve things like this but have failed?

It's not like our country doesn't do anything; it's a very complex problem. There are efforts, and they tend to have mediocre results at best. I've personally worked in some of these programs, and the reality is that government just isn't well suited to the nuance and individual attention different disadvantaged groups require. I'd wager that if you have ideas about how to fix these issues, they'd be similarly flawed.




(519) Its butterfly keyboard design has failed, but Apple has yet to admit its mistake

519 points 1 day ago by aaronbrethorst in 28th position

theoutline.com | Estimated reading time – 10 minutes | comments | anchor

Recently, in a discussion about Apple's terrible butterfly keyboard design with a source who wished not to be identified and who definitely does not work for Apple, they noted I had bought a 2017 MacBook Pro and gotten rid of it after a little more than a year due to the now-notorious problems they have with something as simple as "a piece of dust" causing stuck or dead keys. "So, just curious, why did you buy another one?" she asked, referring to the 2018 MacBook Air I'd gotten six months after selling the Pro.

I didn't have an answer ready, but was impressed that she asked, because I felt largely like an idiot who deserved it. After selling back my MacBook Pro to Apple for about two-thirds of what I paid, my 2013 MacBook Pro (which I kept around even after buying the new Pro; yes, I have a lot of computers) that I returned to using started to show signs of age. It stopped recognizing its battery, even as the keyboard still worked flawlessly. I cast around to every other possible laptop solution — Chromebooks, Windows laptops, a Surface — before deciding I was too married to the Apple ecosystem to leave it.

A pettier side of me felt I couldn't sit by and watch Apple claim to have fixed a problem I sensed it had not substantially addressed at all. It wasn't enough to read about it; I had to go back in and continue to personally suffer the badness of their cursed keyboard design in order to live out a kind of truth, even if it meant certain annoyance and a waste of my money and time. Fueled by equal parts irrational hope I knew I shouldn't trust and deep skepticism to which I should have listened, I bought the 2018 MacBook Air.

Sure enough, a couple months into owning this computer, the keys started to act up. As before, problems would come and go; the E or B key would be unresponsive for a day or so before whatever was jamming them up mysteriously went away. The spacebar was the worst offender. For a long while it doubled spaces from a single keypress, but only sometimes. Finally, it seemed to get something lodged under it big or annoying enough that it couldn't shake itself loose, and I had to pound it to get a space out of it; for two days, my sentencescame outlikethis. I made a Genius Bar appointment.

After verifying my problem, the Genius issued her judgment: "So the keyboard on this computer is very sensitive," she said. "And when debris goes underneath the keyboard, it can become unresponsive. So I'm just going to clean it out, I'm going to take it to the back and blow air in it and we'll see if it gets better. Ok?" Apple's suggested fix of cleaning its easily fallible keyboard with canned air theoretically doesn't require any special equipment or put the computer in a sensitive position, so it's not clear why Apple's service people aren't allowed to just do it out in the open. The Genius took it away, and a few minutes later brought it back, demonstrating its now-functional spacebar and slightly more responsive keys.

It took 1hr+ on the phone to arrange a MacBook Pro keyboard repair (purchased from Apple with AppleCare).

When they say "We're aware that a small number of users are having issues..." with Mac keyboards, that smacks of knowing minimization.

It's HARD to report and get fixed. pic.twitter.com/ETXQng6QDE

— Ryan Begin (@ryan) April 9, 2019

1. "Apple never does X" rarely precludes them from doing something. They often surprise us.

2. I don't think they've ever had a product flaw of the magnitude and severity of the butterfly keyboard.

They have ample motivation to correct this years-long mistake. https://t.co/7cEfQrJw3z

— Marco Arment (@marcoarment) April 8, 2019

I asked her how she cleans out the keyboards when people bring them in. She positioned the monitor at a right angle and stood up the computer on its side, and mimed a back-and-forth sweeping spraying motion from the left side (now top end) of the keyboard down to the right (now bottom).

Does Apple sell canned air? No, she said; she recommended Best Buy or Staples. She told me not to eat over the computer, watching me for a reaction to this somehow insanely unrealistic yet also eminently reasonable prospect. No one aims to take meals over their computer; I'd go as far as to say no one ever does it except out of necessity. And when that necessity arises, it doesn't seem like an unreasonable ask that the computer would be able to tolerate a few crumbs being scattered on it. But I suppose I am in the wrong for expecting my $1,600 computer to be able to cope.

It's clear at this point that I am extremely not alone in having this problem; developer Marco Arment has written extensively about it, the Wall Street Journal's Joanna Stern recently wrote an entire column with authentically dropped e's and r's that were the result of her new (broken) MacBook Air keyboard. (Stern, notably, actually managed to get a real apology out of the company, unlike me. But Apple has yet to match those words with tangible action, like, for instance, making notebooks with working keyboards and offering them as replacements to all of the people who unwittingly bought computers that were bound to break.)

As I wrote a few months ago, the latest redesigns of the butterfly keyboard have failed to address the responsiveness problems; posts from Apple customers on support forums about dead or repeating keys have kept up at a very steady pace. What's changed is that Apple has lured a whole generation of loyal MacBook Air fans, like Stern, into subjecting themselves to this hamstrung version of a keyboard.

My Genius claimed that if this incident occurred multiple times, I might be eligible for a full top-case replacement, like I got with the first computer that had this problem back in September 2017. She gamely checked the replacement program page to see if the MacBook Air was listed as eligible (it is not; the model is less than a year old and so covered by Apple's standard warranty). But she avowed that the Airs seemed prone to the same issues as all the other computers that were now covered, and would likely be added to the program.

What year is yours? I'm still happy as a clam with my 2014 MBP. But I'd have upgraded last year if not for the keyboard concerns.

— John Gruber (@gruber) February 27, 2019

I'm really hoping my 11-inch MacBook Air lasts long enough for Apple to fix all of its keyboards. https://t.co/oHuxaM7hV3

— Dan Moren (@dmoren) February 27, 2019

Apple made its reputation on support; the Genius Bar, at which customers could get in-person help right in any of the retail stores, was in stark contrast to 15 years ago, when the best help you could get for any Windows computer was a tech-support phone line staffed by remote call centers. Even then, that setup was meant to deal more with hardware problems and not, say, viruses or corrupt software. Apple Geniuses, by contrast, handled everything seamlessly.

Fast forward to now, and Apple appears to be buckling a little under the weight of its operation. The Genius appointment-making page has slowly evolved from pointing people directly to in-person appointments to encouraging phone or remote support. A couple of years ago, it started offering support appointments with third-party service providers through the same interface that it offers ones with the Genius Bar, an endorsement of operations outside its normally airtight vertical integration I couldn't have even imagined 10 years ago.

Apple still claims that only a "small percentage" of people experience trouble with their keyboards. But having now heard the idea of a "sensitive" keyboard, I'm not sure I will ever get over it. No one has had to think this hard about keyboards in decades, at least before Apple went in and messed with them. Now the complaints are reaching critical mass and I have Microsoft emailing me offering to fly me out to its Redmond campus so it can walk me through its "seven elements that every keyboard needs to create a great typing experience." I have a hard time imagining what those seven elements are, because I get stuck at two: 1. Produces the characters 2. That I intended to type. These two attributes are also, incidentally, what the biggest and most-valuable tech companies in the world are somehow grappling with anew.

Maybe it's because I am getting old and cranky, but my late model Apple phone and laptop are harder to use, less efficient, less fun— inferior products compared to earlier versions. (Except for the camera.) For me Apple is going backwards.

— Jay Rosen (@jayrosen_nyu) April 14, 2019

In the name of fairness, the Genius-sanctioned canned-air cleaning worked, at least for now. It would likely not resolve the problem of debris that is actually stuck, or broken plastic pieces inside the assembly. But I dread the Overton window shift that Apple now appears to be attempting to push, which is that its customers and their crumbs and dust and bad habits are to blame, and should bend themselves around the "sensitive" keyboard, keep canned air (not supplied by Apple itself) on hand at all times, as if this is a problem we've always had, and not one Apple singlehandedly created with a nearsighted design.

I am stupid for buying another one of these computers, but only as stupid as any of us are for learning to love these dumb tech products on their merits, becoming beholden to the system, and then having a big commitment out of which to dig ourselves (actually not very stupid at all). I'm far more inclined to subscribe to a different argument, which is that Apple is the stupid one for not only trying to reinvent a solution to the extremely solved problem of how to make a working keyboard, but continuing to pretend that, four years and four iterations later, it hasn't utterly failed. The company declined to comment on the record for this article.




All Comments: [-] | anchor

martinpw(10000) about 19 hours ago [-]

The Benjamin Button review of improvements in older Macs compared with newer ones:

https://blog.pinboard.in/2016/10/benjamin_button_reviews_the...

m463(10000) about 11 hours ago [-]

I think my 2007-ish MacBook Pro had the best keyboard of all: light-action concave keys.

I don't think that the superiority of concave keys gets the attention it deserves.

- the keys fit your finger better: the force of the keypress is evenly distributed across the surface of your finger, especially as the key bottoms out. New keyboards deform your finger pad as you press down and are less comfortable.

- the concave shape helps you locate the keys and center your fingers by feel. This leads to more accurate finger placement initially and as you type. You will type more accurately.

thaumasiotes(3611) about 10 hours ago [-]

A comic from the same month making the same point: http://chainsawsuit.com/comic/2016/10/29/the-new-macbooks-ar...

'We want to make a laptop so thin you'll close it and never find it again.'

tzakrajs(4022) about 18 hours ago [-]

Guess this guy was pleased with his slower processor too.

Causality1(10000) about 22 hours ago [-]

I take it with these amazing apple keyboards you can't just pop the keycaps off with a screwdriver to clean under them and pop them back on?

bluedino(2181) about 20 hours ago [-]

You can very, very carefully remove them with a credit card.

However, they are VERY easy to damage. And even if you get the key off without breaking it, the problem is that the butterfly mechanism is very fragile and goes 'flat'; it's not so much about getting dust specks in there.

stunt(4089) 1 day ago [-]

X1 Carbon and XPS 13 are still the best hardware out there if you are using Linux.

zdragnar(4112) about 24 hours ago [-]

My experience with the XPS 13 has been pretty meh. Most things work reasonably well, except for using USB-C for an external monitor, and the keyboard isn't holding up as well as I'd like. My biggest gripe is the wireless: there's a room in my house where every other laptop (MB Air, Chromebook, Windows) gets a perfectly sufficient signal, BUT the XPS gets such a weak signal it's almost impossible to browse the internet.

If I wasn't worried about bricking it and all the config tweaks needed to get things like sleep and power management to work, I'd throw linux on the air and call it a day.

LeonM(4021) about 24 hours ago [-]

X1 Extreme is a better choice imo, up to 64G (replaceable), 2 m.2 slots, a GTX1050 for casual gaming or ML and a 4K display. I have one on order.

There is also the P1, which comes with a Xeon (!) option and Quadro P1000 graphics. The market for the P1 is pretty specific, but if you need those nVidia certified drivers, this is the one.

Edit: obviously you wouldn't run Linux on the P1 if you bought it for the certified drivers. It would still work though.

hsbaut76(10000) 1 day ago [-]

The T480s seems like a more balanced machine than the Carbon, imo. The XPS line has quality-control issues.

dontknowme(10000) about 21 hours ago [-]

I've been using company-provided X1s: an X1, then an X1 Yoga 1st edition, and now a 3rd edition.

I had several issues with the arrow keys on the X1 Yoga 1st edition.

Tactile feedback while pressing, but without actual keypress action, on the up arrow. We had a batch of 15 of those, and the issue was present on 8 of them.

The 3rd edition now has the same issue on the left/right touchpad buttons (the physical ones). If you don't press on the dead center of these keys, you feel the physical feedback, but there's no keypress action. Again, we got a batch of 15 of those, out of which 10 had this issue.

For the 1st edition the shape of the key was the issue (a slant on the membrane), while for the 3rd the key is just too soft (the plastic bends before pushing on the membrane).

This is the most brutal behavior a button can have. What's the point of physical feedback if it's broken?

The monitor backlight on all the X1 laptops we have (we've been using them for 5 years) has developed bright spots. The black level of the screen degrades pretty quickly too, settling after 3 months of normal usage. If you turn on a 'fresh' X1, it has significantly more contrast than what you'll typically get a few months on.

Overall, it's still a great line (up and beyond the HP 'elitebook' series), but the keyboard and button issues are really irking me, and the Yoga 3rd edition IMHO has a worse keyboard than the 1st overall.

I sadly cannot compare with the XPS series as I never used the series long enough to judge.

Rudi9719(10000) about 24 hours ago [-]

I don't understand the failure of the butterfly keyboard. I haven't had any issues with my 2017 MacBook, and it seems to keep cruft from getting under my keys (a problem I had with the older MacBook keyboards).

I also live on my macbook so it's constantly in use, and much to my displeasure, around food.

jswizzy(10000) about 23 hours ago [-]

So your entire comment is based purely on anecdotal evidence?

TN1ck(3994) about 23 hours ago [-]

You seem to be a rare case. I'm also a heavy user; I got keys stuck a lot and actually broke two keys (I did nothing out of the ordinary). I certainly like the feel of the butterfly switches, but they really are incredibly frail.

wildlogic(4072) about 23 hours ago [-]

I was happy with the feel of the 2017; however, after a few months a couple of keys got stuck to the point of it being unusable. I took it in for repair at the Apple Store, and after repair, touching the ID sensor would cause the machine to freeze. Soon after, more keys started getting stuck, and I just put the thing in a drawer and pulled the 2015 back out, which is what I'm typing on to this day. I've been using Apple computers exclusively since the LC III in the early 90s, but my next laptop will not be a MacBook.

carlosrg(3839) about 21 hours ago [-]

Same here, 2016 MacBook (so it's not even the second-gen keyboard). Still works fine, although I would prefer more key travel.

steev(10000) about 23 hours ago [-]

You are very lucky. I had three students in my office hours recently and of the four of us, all with butterfly keyboard MacBook Pros (some with the updated keyboards), three of us have had keyboard issues such as the spacebar double registering presses or other keys not registering a press at all.

Within my family, everyone that has a newer MacBook Pro has had keyboard issues.

I bought a used Lenovo X1 off ebay for $300 and that has become my daily driver, despite getting the keyboard repaired on my MBP. It doesn't get close to the same battery life as my MBP, and waking from sleep is a little iffy sometimes, but in terms of reliability and usability it far exceeds my experience with the MBP.

ascii_only(10000) about 23 hours ago [-]

Nearly half of the third-gen Apple butterfly keyboards at Basecamp have failed. There are a few other personal reports with similar percentages.

Second problem is that it will cost a lot of money to fix your keyboard out of warranty.

zenexer(10000) about 23 hours ago [-]

I was just like you until a couple weeks ago. No grime in the keyboard, heavy use, took it with me everywhere, used it every day since 2016. (That being said, it was/is in pristine condition--no scratches, dents, or drops.)

But one day it just happened. Nothing to provoke it--hadn't been eating over it, hadn't been treating it any rougher than usual. E key started bouncing (repeating). Rarely at first, but it got worse over the course of a week. Canned air didn't fix it--had to ship it out.

I can't really say it was a bad experience, though. They handled everything without issue. I walked into the store--no appointment--and they took a look at it immediately. I didn't pay anything despite being out of warranty and without AppleCare. Had it back in my hands a few days later.

Personally, I love the feel of Apple's butterfly keys. I like the short travel distance and the satisfying 'click' they make. I've had plenty of similar issues with scissor switch keyboards; it's just that they're easier to repair. Pop off the key, give it a good blow, and it's as good as new, though you might end up inhaling some ancient crumbs in the process. Allegedly, that usually doesn't work with butterfly keys, for whatever reason, and they're hard to remove without breaking.

venantius(3392) about 23 hours ago [-]

Then you're lucky. We have two 2017 MBPs and one has a broken R key. Broadly speaking (a) the failure rate is really high and (b) the fact that it can only be fixed by a complete front plate replacement makes this a punishing error. In the old days if a key broke you could just replace that key most of the time. Here you need to replace half the chassis, which necessitates a trip to the Apple Store.

benologist(1015) about 21 hours ago [-]

Refusing to admit a mistake this big is simply lying. They have a severe honesty problem when it comes to admitting their hardware faults. How is this legal? Imagine if car companies, instead of recalling their broken shit, just lied and quietly tried to fix it on next year's model...

When the free keyboard replacement program shuts down, and Apple still denies the problem, and Apple happily charges you $795 every so often to fix their mistake over the life of the machine, will it be fraud?

linuxftw(10000) about 19 hours ago [-]

Many poorly designed car parts fail prematurely just outside of warranty. If there's no safety concern, there's no recall.

Tempest1981(4058) about 11 hours ago [-]

Where is the threshold between 'marketing spin' and lying?

ajma(4049) 1 day ago [-]

hmm... am I alone in being someone who likes the butterfly keyboard?

SenHeng(3865) 1 day ago [-]

Been using them for several years now and I like them a lot too. I have the v2 MacBook where the 'b' key got stuck once, but I managed to fix it with a can of compressed air. I've had various MBPs at work and most have been fine too.

I've always had a habit of regularly cleaning my keyboards too, which I think may have helped reduce the amount of dirt getting stuck underneath.

nihonde(10000) about 24 hours ago [-]

No, I also like it. I'm not saying the complaints are unfounded, but they're not my issues. I broke the command key after about a year of heavy use, but it took about five minutes at an Apple Store to have it replaced. I've broken keys on every laptop I've ever had, including the venerated IBM/Lenovo Thinkpads. (Don't even get me started on how many problems I had with that horrendous trackpoint thing.)

Since then, my only complaint about the MBP is the arrow key arrangement, which I will never adapt to, and the goddamn useless, laggy yet hypersensitive touchbar.

When I have to use other people's keyboards, I dislike their long travel now.

kevinherron(10000) about 23 hours ago [-]

No, I like mine also. I'd be annoyed too if a bunch of my keys were sticking, but they aren't, so I guess I'm lucky.

auggierose(3491) 1 day ago [-]

no, you are not

legohead(4109) 1 day ago [-]

as long as it lasts, I enjoy it.

Dylan16807(10000) 1 day ago [-]

Not at all. Even a lot of people that hate it liked it for several months, until it broke.

That doesn't say much about whether it's a good idea overall, though.

jackconnor(4112) 1 day ago [-]

I also hate it, but maybe it's like music and it's totally subjective.

ummonk(4069) 1 day ago [-]

I haven't had too much trouble with it aside from a poorly responsive spacebar at one point. I still dislike it though, and don't really find any value in the extra thinness.

sonofaplum(10000) 1 day ago [-]

the article isn't about the feel of the keyboard, it's about the reliability. Unless you are saying you like stuck keys?

wmeredith(2770) about 23 hours ago [-]

No, but we're just quietly working and using our macs. There's obviously a problem, but it's not the end of the world.

kalleboo(3856) 1 day ago [-]

I like the sharpness of it and don't mind short travel.

I hate when I have to send my laptop in to Apple for 10 days to repair it because a speck of dust entered it.

So now I use a 'keyboard condom' on it to keep dust out (despite Apple's explicit support article saying NOT to use one), and it's turned into the worst keyboard since the ZX80.

sbuk(2895) 1 day ago [-]

Nope.

dsego(613) about 21 hours ago [-]

I liked it when it worked. It's all about what you are used to, I think. After typing on a mechanical keyboard for a while, even a thinkpad keyboard will seem like pressing on cardboard. But mine broke a lot, and I managed to snap off the key caps; those are really fragile.

linguae(4114) 1 day ago [-]

My company bought me a 2018 MacBook Pro about two weeks ago and so far I'm enjoying the keyboard; I like the tactile feel (I typically use mechanical keyboards whenever I'm at a desk, and so the tactile feel of the MacBook Pro's keyboard was a pleasant surprise), and there's more travel than I thought there would be. I remember trying the butterfly keyboards back in 2016 and hating them, so either something must have changed or my preferences have changed.

With that being said, I've only had this MacBook Pro for two weeks, and so we'll see if it remains reliable over the next few years.

dmitriid(2005) 1 day ago [-]

I didn't like it at first. Then I became ok with it.

But it's so so so infuriatingly shoddy. The Cmd key fell out on my previous laptop. On my current one most of the keys feel wobbly. Coupled with accidental brushes against the touchbar and the unnecessarily enlarged touchpad frequently failing at palm rejection, this ends up being a very subpar experience compared to earlier models.

ricw(3481) 1 day ago [-]

I love it. I recently had to temporarily go back to a 2014 MacBook and it just felt mushy and imprecise. The new keyboards feel sharp and way more precise. Just simply better.

camhenlin(3950) 1 day ago [-]

I love them. The newer keyboards have a very satisfying clickiness to them. I guess I'm lucky to have a 2018 model that's not had keystroke issues, however.

eeeeeeeeeeeee(10000) about 24 hours ago [-]

I think you'll find a lot of people that like the way the keys feel and travel — like me — but are really tired of not having a reliable machine.

I've had to take multiple different machines in for repair. And each time I have to tell them to replace the entire keyboard, but they always want to blow it with an air can. Utterly infuriating.

thought_alarm(4100) 1 day ago [-]

The arrow keys are one of the most essential parts of a keyboard for people who write a lot of code or otherwise do a lot of copyediting.

This type of user needs to be able to lock on to the arrow keys repeatedly and unconsciously without fail throughout the work day.

Apple's keyboard designers understood this and got it right for over 15 years. But that changed in 2016.

As of 2016, Apple's new keyboards provide zero affordances for these kinds of users and are completely unsuitable for real work.

You couldn't pay me enough to use one of these keyboards for writing code, and I certainly wouldn't pay my own money for one.

cmiles74(4022) about 19 hours ago [-]

A co-worker of mine convinced the company to buy them a MacBook (the first the company has ever purchased) to replace their aging Dell laptop. They have been doing more and more work on the company's iOS app, and the aging Mac mini the company had purchased needed to be replaced anyway.

After having the laptop for just over a year, they called me upset as the laptop would no longer power on. After some gentle interrogation they admitted that they had eaten a snack near the laptop and that, perhaps, a small amount of seltzer had 'hit the keyboard'. I had them bring it to the Apple Store, and Apple refused to repair it under the warranty as it had 'water damage.' I had asked the company to purchase the extended warranty, but that had been lost in the shuffle. My co-worker ended up shelling out ~$1000 out of pocket as they needed to get work done and they had very real feelings of guilt. After all, they did _eat_ near the laptop.

I agree no laptop should get wet, or have a dollop of guacamole sauce splashed across the keyboard, or have the display sprayed with a fine mist of Coca-Cola as someone laughs at an unexpected joke. But these things happen, and I don't think Apple is helping anyone by making laptops that can't withstand these common mishaps.

dathinab(10000) about 19 hours ago [-]

Honestly, any laptop you pay 1200+ for should 'survive'(1) you _splashing_ coffee on it, not just a few drops of liquid. It's totally technically doable in a number of ways; the only things standing in the way are the fanaticism about shaving another mm off the laptop and/or profit margins.

(1): Survive in the sense that it's generally fine and you at most have to replace some easy-to-replace, not super expensive part. I, for example, splashed ~1/2L of orange juice on my laptop, and it was fine except for the keyboard (because it was orange juice; water probably would have been fine). Removing the keyboard so that I could continue working with an external one took ~10 min for an inexperienced person. And the replacement keyboard cost ~80 Euro with backlight, 40 without, and was also easy to put back in.

G4BB3R(10000) about 19 hours ago [-]

I am sorry for what happened. But I just can't understand people who drink and eat at the same table as their electronics. An accident may be unlikely, but eventually it will happen and will damage everything, so why do it? It upsets me when I see people taking risks for nothing. Food and drinks shouldn't leave the kitchen.

camelNotation(10000) about 18 hours ago [-]

Not only are Thinkpad keyboards the most comfortable and satisfying on the market, you can literally pour water on them and it will drain out the bottom of the laptop without ever touching internal components.

After using my Thinkpad for about six months, I can't go back. The keyboard is amazing, build quality is rock solid, screen is 4k with 100% RGB, and you can actually upgrade the thing.

I buy a laptop because I need a mobile keyboard experience. Why on earth would I buy a laptop with a keyboard that feels awful and can't handle normal use?

Honestly, if I absolutely had to have OSX, I would just buy a Mac Mini and deal with the stationary aspect of it. Macbooks are just that bad. If I desperately needed a portable Mac, I would probably even be willing to deal with iOS on the iPad Pro over a Macbook.

alexhutcheson(4003) about 19 hours ago [-]

Is it normal for companies to expect employees to pay for accidentally damaged equipment? Everywhere I've worked, some accidental damage was treated as a cost of doing business, and IT would replace it.

TheOperator(10000) about 15 hours ago [-]

I think laptops SHOULD be water resistant, actually, and that it's bullshit to not be able to eat and drink at my computer. So I buy durable computers instead of the MBP. Probably the only thing that's truly inadvisable is drinking stuff like sugary drinks and milk at the computer, because that crap is really hard to clean. Drink water.

I've always liked Lenovo computers' ability to take a spill.

AareyBaba(10000) about 17 hours ago [-]

I have ruined several keyboards (3 Apple, 1 HP) by accidentally spilling water/coffee on them. What I don't understand is how these devices get completely destroyed by small amounts of liquid.

I've disassembled these keyboards, cleaned and dried them out to no avail. Is something getting shorted out permanently? Is some sensor getting ruined? What exactly is happening, and is there nothing that can be done to revive them?

JustSomeNobody(3879) about 19 hours ago [-]

> My co-worker ended up shelling out ~$1000 out of pocket as they needed to get work done and they had very real feelings of guilt.

Ouch! Really? What kind of company wouldn't pay for this repair? And isn't this some sort of legal issue? The laptop doesn't belong to your co-worker so they can't really authorize repair of it. Just a weird situation, that.

sf_rob(10000) about 16 hours ago [-]

I damaged my company 2017 MBP by dropping it ~24" when I caught the cord on my foot. The Genius Bar wanted me to pre-authorize $800 of charges before they would even look at it. I was a bit appalled that I couldn't pay any kind of diagnostic fee.

EGreg(1700) about 19 hours ago [-]

You can get one of these for really cheap... I use them to type in other languages

https://m.aliexpress.com/s/item/32774074005.html

scarface74(4022) about 19 hours ago [-]

I'm in no way defending Apple's keyboards, and there are some real issues with tiny bits of dust completely ruining a keyboard, but normal laptops have always been susceptible to damage from even a little liquid.

But if your coworker couldn't just go to his manager, tell them what happened and the company paid for it, that says more about the company.

ddingus(4071) about 19 hours ago [-]

I had that happen to my 2012 MBP.

Ordered replacement top case with keyboard assembly off ebay.

Took a very long day to take it all apart, transfer to new top, and reassemble.

Worked first time.

A few months later random keys would glitch.

I use it like a desktop when I need to now.

headsupftw(10000) about 18 hours ago [-]

By Bay Area standards, you work for a super shitty company.

thegayngler(10000) about 17 hours ago [-]

I guess I'm just an outlier, as I love the butterfly keyboard. It is certainly my favorite thing about the new Macs. I still have my 2016 Mac and it is in prime condition. I am a light touch typist; therefore I dislike having to mash my fingers down into the keys to register the click.

pil4rin(10000) about 17 hours ago [-]

Same!

theonemind(4101) about 17 hours ago [-]

I have a bit of repetitive strain injury so I try to use a mechanical keyboard with light force requirement and not bottom out the keys. Reducing travel and force when typing seems to help. I also like the butterfly keyboard a lot, so it probably works great for light typers.

lenocinor(10000) about 17 hours ago [-]

I like it too (2019 version). I hated it the first three weeks, but now it's great and I prefer it to my old 2012 MacBook Pro keyboard. Maybe I'll hate it again in six months or a year if it dies like many other people's have, but for now it's great, at least.

droptablemain(4057) 1 day ago [-]

I tried, but I don't really feel much sympathy for someone who buys a product with a 40% premium that's effectively a status symbol masquerading as a piece of hardware.

To be honest, I'm quite giggly over this whole ordeal.

mratzloff(3777) about 23 hours ago [-]

MacBooks aren't a status symbol anymore. You only tell yourself this to feel superior instead of considering that there are actual reasons people prefer macOS and, yes, the ecosystem over Windows or Android. Many of these reasons are cited in this thread.

carlosrg(3839) about 21 hours ago [-]

I'm willing to pay that 40% for the only decent desktop OS out there.

danieldk(2454) 1 day ago [-]

> I tried, but I don't really feel much sympathy for someone who buys a product with a 40% premium that's effectively a status symbol masquerading as a piece of hardware.

Oh come on, this is getting really old. A lot of people (most Mac users that I know) pay the 40% premium because they want macOS and the Mac application ecosystem. Sure, there are people that will buy it as a status symbol, but that's a crude over-generalization.

Also, a Surface with about the same specs sells at about the same price at our local retailer. The Surface Laptop 2 with a Core i5, 8GB RAM, 256GB SSD is 1449 Euro, the MacBook Air 1409. There are some differences like a touch screen (Surface), Touch ID (MacBook), higher PPI (MacBook). But for all practical purposes, they are in the same class and have the same price. A similarly spec'ed Dell XPS goes for 1399 Euro without a HiDPI screen.

Dylan16807(10000) 1 day ago [-]

"seven elements that every keyboard needs to create a great typing experience." I have a hard time imagining what those seven elements are, because I get stuck at two: 1. Produces the characters 2. That I intended to type. These two attributes are also, incidentally, what the biggest and most-valuable tech companies in the world are somehow grappling with anew.

Oh come on. Having a keyboard that causes as little fatigue and soreness as possible over time is an unsolved and complicated problem. There's also noise to consider. And laptops don't have infinite room. These factors come after 'does it make the characters?' but they are not unimportant.

mwfunk(10000) 1 day ago [-]

She was being sarcastic. She wasn't saying that nothing else matters, she was saying that nothing else matters if it fails at being a functional keyboard.

kkarakk(10000) 1 day ago [-]

>1. Produces the characters 2. That I intended to type.

if it helps, the butterfly mechanism fails at doing that if even a tiny speck of dust/grit gets in the mechanism

muraiki(2869) about 22 hours ago [-]

I had slowly been moving into the Apple ecosystem, buying an iPad, iPhone, and even a HomePod and Apple Music. I was largely motivated by a respect for Apple's stance on privacy. Then it came time for me to buy a laptop and I encountered these numerous reports of Apple screwing up one of the most basic parts of the computer. I couldn't justify spending so much on a laptop plus AppleCare, which would only result in getting another keyboard that suffers from the same problem.

I realized that this is what Apple lock-in means, and now I'm leaving their ecosystem. Whereas before I would tell friends and family to just get Apple products, now I'll tell them to buy a Chromebook or Windows laptop. Maybe this is why I keep seeing the newly released Macbook Air on sale for $200 off...

geophile(2978) about 21 hours ago [-]

I am somewhat locked in to 'the ecosystem', having an iPhone and an MBP. But the badly designed apps, the disaster that is iCloud, and the new MBP keyboard (the broken butterfly keys, the moronic touchbar) have motivated me to escape.

It's not entirely feasible, even with my minimal exposure. So I've compromised. My daily driver is a Darter by System76. The hardware is good enough, the battery life is good enough, the keyboard is fantastic, and Pop OS is beautiful. I still have my failing MBP, but it is demoted to a station for syncing my iPhone. I rarely touch it. When it dies I will replace it with a Mac Mini. I don't want to move to the Google ecosystem because of privacy issues.

usaphp(1528) about 20 hours ago [-]

> I couldn't justify spending so much on a laptop

Apple MacBooks lately are being priced on par with the competition. You want premium screen and trackpad - you will have to pay just as much for a windows laptop.

deminature(4019) about 15 hours ago [-]

To play devil's advocate, I've fully adjusted to the butterfly keyboard design and find the travel of the older keys excessive and unnecessary work for my fingers. All my colleagues type 8hr+ a day on the butterfly keyboard without complaints.

While I have experienced the annoyance of debris getting caught underneath keys, it seems like a problem calling for refinement of the design rather than tossing it out entirely, as many seem to be advocating.

I'm not sure articles like these are indicative of general sentiment towards the design. People enjoying the keyboard have no reason to comment about it, because having an opinion that you enjoy the status quo isn't interesting or worth sharing.

makecheck(3877) about 9 hours ago [-]

They can't just "refine" it if their 3rd iteration still has problems and it's been years. They would have been better off reverting.

And honestly, I'm not sure I will trust them if they get up on stage again with another "all new" design. I want them to basically say "we went back to exactly the same design as our 2013 laptops".

wingworks(3970) about 11 hours ago [-]

I have a 2017 MBP, and a few months in, keys started to fail. I eventually got it repaired by 2 AASPs, but had a TERRIBLE experience with Apple support (the first repair agent they told me to send it to, I later found out, was known to be terrible, and they ended up breaking more than they fixed on my laptop; I did my own research then and found a good AASP to get the first AASP's work fixed).

All in all, it took over half a year to get it all fixed. I rarely have a need to go to Apple to get things fixed but this one incident really soured my view of Apple support.

Having said all that, the replaced keyboard has worked without fault since, so they must have put in a slightly refined one.

davidandgoliath(4100) about 18 hours ago [-]

I abandoned apple in Dec. of '17 after 5+ years in the ecosystem & went on a hefty soul search for what to use next. I knew the touchbar was sort of a warning flare that they were trending away from what I'd need out of computing.

After frustrations with windows / linux (laptop, dell xps 13), I ended up shelling out a bunch of money to get a macbook pro '16 with just the two ports: No touchbar. Had to have the top replaced within 6 months due to keyboard. The two ports heated up and pushing heavy throughput via wireless would make bluetooth go apeshit. Sold it immediately afterwards, endured a few more frustrating months with linux. Got a '17 pro, surely they had fixed things. Keys started sticking. Replacement, sold it. Now on a '19 macbook air, because surely they've fixed it, yes?

Keys are sticking after less than ~10 uses.

My desktop has been linux powered since that soul search started, without issue. The trackpad on my lenovo x5c is working wonderfully under ubuntu after I realized I had been importing broken configs from my 10+ year old backups: Exclude your dotfiles.

I only miss two things from the entire ecosystem: ulysses (the text editor) & little snitch. Everything else about linux is far superior.
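
For the 'exclude your dotfiles' advice above: a minimal sketch, assuming hidden files all start with a dot and that you restore from a Python script (the paths and the restore_without_dotfiles helper are hypothetical, not the commenter's actual workflow):

    # Minimal sketch: copy a backup tree while skipping dotfiles, so
    # stale configs don't come along for the ride. Note copytree
    # requires dst not to exist (or dirs_exist_ok=True on Python 3.8+).
    import shutil

    def restore_without_dotfiles(src: str, dst: str) -> None:
        # ignore_patterns('.*') drops any file or directory whose name
        # starts with a dot, at every level of the tree.
        shutil.copytree(src, dst, ignore=shutil.ignore_patterns(".*"))

    if __name__ == "__main__":
        restore_without_dotfiles("/media/backup/home", "/home/user/restored")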

gnicholas(1467) about 18 hours ago [-]

> Now on a '19 macbook air, because surely they've fixed it, yes?

Is there a 2019 MBA? I thought it was most recently updated in 2018. It certainly feels better than the older models, and even a smidge better than the 2018 MBP, which was updated earlier in the year. But there are still reports of issues with the 2018 MBA, including in the WSJ writeup IIRC.

darrmit(10000) about 22 hours ago [-]

It's not just the keyboard, either. I've been buying Macs since they switched to Intel in 2006 and the most recent MacBook Pros are the most problematic I've ever had.

- Bluetooth seems to be fundamentally flawed at a hardware level. AirPods, Plantronics, doesn't matter. It just randomly connects and disconnects as it pleases.

- USB-C dongles are a minefield of poor functionality

However, I've run Linux off and on since 2007 and Windows for 20+ years and I still wouldn't give up macOS as my daily driver. Even with the issues above it still has an excellent display, I like the keyboard (waiting to start having issues), and the battery life is the best I've ever had.

I'm glad to hear desktop Linux works for some people, but I just don't have time to a.) tune and configure it to work and have decent battery life with whatever hardware I choose or b.) troubleshoot it when it randomly decides to break itself on update.

H1Supreme(10000) about 22 hours ago [-]

> tune and configure it to work

You really don't have to do that. For example: I put Ubuntu Mate on a 2009 Macbook that could not run MacOs anymore (too slow). I had to install a wifi driver, and that was it. Everything else worked out of the box.

My current Dell Inspiron 7000 laptop is running the same OS, and needed 0 drivers installed.

I put Antergos on an Intel Nuc (that I later switched to Ubuntu Server). Again, no configuration, and no issues with drivers.

I will mention that Mate has some font scaling problems if you're running two monitors with different resolutions (spoiler: you can't), but I know that other distros handle that just fine.

lloeki(3857) about 22 hours ago [-]

> USB-C dongles are a minefield of poor functionality

Since most of my peripherals have a detachable cord, I just bought a couple of USB-C <-> whatever cables and now I'm fully 'native' USB-C. Suddenly the couple of USB-A host devices I have around feel incredibly legacy (an old MacBook, Xbox One).

> Bluetooth seems to be fundamentally flawed / USB-C dongles

I do have a USB-C-to-many-things hub, but I never use it now. It should be noted that the thing is so badly shielded that it threw WiFi down the curb every single time I plugged a device or even an SD card into it, so it might affect Bluetooth too. I have zero issues with the same devices when using USB-C cables.

Sir_Cmpwn(313) about 21 hours ago [-]

It sounds like you're spending as much time on finicky Bluetooth and USB-C as you might spend on battery tuning on Linux.

flowersjeff(10000) about 19 hours ago [-]

The whole selling point of Macs was the lack of need to 'tune'. Yet I even had to do this with my 2014 model - otherwise, the laptop would run hot to the touch. It is troubling to see this trend.

Kaze404(10000) about 17 hours ago [-]

> a.) tune and configure it to work and have decent battery life with whatever hardware I choose

Literally all you have to do is install powertop.

> b.) troubleshoot it when it randomly decides to break itself on update.

I've been running Linux desktop for almost 2 years now and I've never had that happen. Unlike when I used Windows, where a W10 update would be forced into my system and either change functionality or fundamentally break something.
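
To make the powertop suggestion above concrete: a minimal sketch, assuming powertop is installed and the script runs as root on Linux, of wrapping its one-shot tuning mode; the --auto-tune flag applies every power-saving toggle powertop would otherwise suggest interactively.

    # Minimal sketch: apply powertop's recommended power-saving
    # settings in one pass. Assumes powertop is installed and this
    # runs as root on a Linux machine.
    import subprocess

    def auto_tune() -> None:
        # 'powertop --auto-tune' flips every tunable powertop reports
        # as 'Bad' to its recommended power-saving state.
        subprocess.run(["powertop", "--auto-tune"], check=True)

    if __name__ == "__main__":
        auto_tune()

The settings last until reboot; distributions typically persist them with a small systemd unit that reruns the command at boot.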

lrem(10000) about 15 hours ago [-]

I'm on a 2013 retina and wouldn't swap it for the new ones even if somebody paid me to. The line has regressed in a bunch of ways, and the only thing I would get in return is a slightly faster CPU. Which doesn't really matter. If I had a significant workload, I would not put it on a laptop anyway.

tomwilson(10000) about 21 hours ago [-]

I'm not sure it's the MBP that's the problem with the Bluetooth - my iMac at work has the mouse disconnect randomly at least a couple of times a week.

wazoox(3661) about 18 hours ago [-]

I've tuned and configured my WindowMaker desktop in 2003, and it didn't change much since then, through many, many upgrades of hardware and software, switch to 64 bits, etc. Oldies are goldies. If it ain't broken, don't fix it.

xfer(4118) about 20 hours ago [-]

> I'm glad to hear desktop Linux works for some people, but I just don't have time to a.) tune and configure it to work and have decent battery life with whatever hardware I choose

So you are comparing an OS finely tuned for specific hardware to using Linux with whatever hardware you throw at it? Why don't you just pick specific hardware that works well with Linux, e.g. ThinkPads?

seba_dos1(3592) about 22 hours ago [-]

> but I just don't have time to a.) tune and configure it to work and have decent battery life with whatever hardware I choose or b.) troubleshoot it when it randomly decides to break itself on update

From my experience, troubleshooting macOS when it randomly decides to break is almost a Windows-like experience - lots of frustration and no sensible help on the internet (unless your problem is so trivial it gets solved by something like an NVRAM reset, because that is the best advice you can count on online). I absolutely prefer GNU/Linux where, if something breaks, it's not that hard to get it back to a sensible state.

weberc2(4085) about 20 hours ago [-]

Also, the trackpads on Macs just work. I made a foray back into Windows a few years ago and was disappointed to learn that PC trackpads had made little progress in the intervening decade. Is it really easier to touchscreenify your entire software stack than to build a sane touchpad?

achow(4109) about 20 hours ago [-]

Flexgate

robertAngst(3920) about 22 hours ago [-]

>Even with the issues above it still has an excellent display, I like the keyboard (waiting to start having issues), and the battery life is the best I've ever had.

I don't understand this.

I understand not liking Linux Desktop (I hate it, but LOVE LOVE LOVE Ubuntu Server, fav OS of all time).

And... the screen is what has you sold on Apple OS?

I'm flustered at this logic. Pretty screen -> good operating system.

I can only imagine you use your laptop as a facebook machine. Would a cellphone do?

bnt(4073) about 22 hours ago [-]

How about wifi? I have been experiencing random wifi disconnects ever since the mid-2014 MBP, all through the current-gen 12" MB. It's so infuriating that I had to disable wifi and Bluetooth altogether and switch to a dongle with an Ethernet connector. I changed routers and even my home (!!) to see if the issue persists (it does). And I'm not alone; forums have similar cases dating back years before mine.

paxys(10000) 1 day ago [-]

> "So, just curious, why did you buy another one?" she asked, referring to the 2018 MacBook Air I'd gotten six months after selling the Pro.

I was hoping she talked more about this part, but brushing it aside by bringing up 'Apple ecosystem' is pretty unsatisfactory. It does tell you, though, that Apple has its target demographic by the balls, and broken keyboards are certainly not chasing them away.

Macbooks will get fixed the day tech journalists start writing articles about how they tried a Windows laptop and liked it.

sundvor(3885) 1 day ago [-]

My laptop of choice (X1 Carbon) consistently gets a pretty good rep for its keyboard. I'm still using my ~6.5 year old first generation (top spec) and the keyboard remains flawless.

But then again they actually test for foreign particle ingression:

https://solutions.lenovo.com/resource-center/pc-solutions/pa...

Mindwipe(10000) about 24 hours ago [-]

TBF, I still find that I just prefer using MacOS to Windows. Windows has definitely improved, but my Mac just doesn't slow to a crawl while something updates or indexes in the background, which happens to me in Windows still. All the time.

I think that's a perfectly legitimate answer. It's just a shame that the available hardware to do so is currently terrible.

robertAngst(3920) about 22 hours ago [-]

My last company gave us iPhones, and I was excited to try an Apple product without giving Apple money.

Wow, I felt like I was using a phone that was 5 years old.

I can only imagine Apple users have an Allegory of the Cave effect: they don't know what's out there. Leaving the cave is lots of work, so they enjoy what's available to them.

marmaduke(4104) 1 day ago [-]

I do IT orders in a research institute. An MBP user wanted a big-memory machine to do server-side data analysis and wanted to order the iMac Pro with 128 GB of RAM. He was shocked to learn that for the same price he could get a Dell Epyc server with 512 GB of RAM. Guess what he still wanted? The iMac, for the "ecosystem".

Cthulhu_(10000) about 23 hours ago [-]

> Macbooks will get fixed the day tech journalists start writing articles about how they tried a Windows laptop and liked it.

This happened after the touch bar MBP was released; a lot of people made the switch then. Did that count for nothing in the end?

Nextgrid(10000) about 23 hours ago [-]

> they tried a Windows laptop and liked it

This requires Windows to become likeable.

I (and plenty of other people) will like Windows once it:

* has a consistent UI instead of 3 control panels with icons and UI paradigms ranging from Windows 95 to Windows 10.

* ships with quality applications that don't try to eat the entire screen to display 2 lines of text (pretty much all 'modern' built-in apps are a disaster in that regard)

* has an App Store with decent, curated apps. Now I'm not sure if stores are the future on desktops, but Apple at least seems to be able to keep the crap at bay and actually have some decent productivity apps in it. Windows Store? Oh yeah, knockoff Flash Players and similar scams, and near zero apps you'd actually want to pay for.

* doesn't come with ads nor invasive telemetry that sometimes re-enables itself after updates (don't mention the Enterprise edition which you can only get in volume, so no solution for freelancers & small businesses)

* doesn't force a stupid phone-style lockscreen on non-touchscreen machines (I'm sure you can disable it with a Group Policy or registry tweak - see the sketch after this comment - my point being, I shouldn't have to spend hours doing that - macOS comes with reasonably sane defaults in comparison)

* has a start-menu search that actually works and doesn't surface irrelevant crap from their failed attempt at a search engine.

* has a calculator that doesn't take 10 seconds to load and then asks me to 'rate' it (seriously? can't believe I'm saying this)

* proper QA - since Windows 10 the quality has gone down the drain and it feels more like a half-assed Linux distribution except that it has ads and you still pay for it

MacBooks definitely have their flaws, but overall it's still worth it, and I write off their price as a cost of doing business (even if I had to buy a new MacBook every few months it would still be worth it for me). Microsoft (and their OEMs) can have my business once they put out a decent OS that actually feels polished and works for me, not against me. They managed to do it with Windows 7; there's no reason they couldn't do it again.
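
On the lock-screen bullet above: a hypothetical sketch, not the commenter's actual method, of the registry tweak alluded to, using Python's standard winreg module. NoLockScreen is the documented Group Policy backing value; whether it is honored varies by Windows 10 edition, and the script must run from an elevated (administrator) prompt.

    # Hypothetical sketch: set the NoLockScreen policy so Windows
    # skips the phone-style lock screen. Requires elevated Python on
    # Windows; reliably honored only on some editions.
    import winreg

    KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\Personalization"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # NoLockScreen = 1 tells Windows not to show the lock screen
        # overlay before the sign-in prompt.
        winreg.SetValueEx(key, "NoLockScreen", 0, winreg.REG_DWORD, 1)

The same value can be set by hand in regedit, or pushed via the 'Do not display the lock screen' Group Policy.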

megablast(3662) about 21 hours ago [-]

Because Windows is absolute shit, and has been since they released Vista. I have tried going back, have to use one at work occasionally. Damn awful.

Spooky23(3552) about 23 hours ago [-]

That's going to happen.

It's hard to weigh the Mac ecosystem heavily when it's obvious that it is a dead man walking platform. How Apple handles the iTunes transition will be the turning point.

Knock on Windows, but it is a living platform that Microsoft is investing in. We all laughed at Windows 8 and Surface, but if you're still laughing now, it's because you haven't looked.

rasz(10000) about 23 hours ago [-]

>I am stupid for buying another one of these computers

coldtea(1255) about 23 hours ago [-]

>I was hoping she talked more about this part, but brushing it aside by bringing up 'Apple ecosystem' is pretty unsatisfactory. It does tell you, though, that Apple has its target demographic by the balls, and broken keyboards are certainly not chasing them away.

It's more like, I'd rather pay another sum (assuming I have it), than have to suffer the experience of Windows or desktop Linux (and I've used all three for decades -- in fact professionally, it's all Linux work).

But of course, I'd rather not have to pay for a broken keyboard fix, or suffer the broken keyboard. But the experience on the other side is so much worse to me that it still trumps that (possibly because I can afford it, although just barely -- I've resorted to using external keyboards with my 2017 MBPr).

The 'ecosystem' (e.g., that I'd lose my paid apps or such) is a non-factor for me.

dalbasal(10000) about 24 hours ago [-]

There just isn't much choice in the market. It's been windows or apple for a long time.

Windows and the laptops that come with it have gotten a lot better since the (imo) Vista low point that got a lot of current Mac users started (or back). But there still aren't many choices.

I was hoping android (or even iOS) would lead to some interesting new laptop choices but if you're in the market for a pc, it's still a pick-1-of-2 choice.

gotrythis(4060) about 14 hours ago [-]

After using Windows since the beginning, I switched to Mac and leased a top-of-the-line MacBook and iPad, lured in by their great build quality and Mac-only software.

Neither reliably works for input!

The Mac has the keyboard issue, and the iPad has the less well known issue with randomly not being able to recognize fingers. Neither is fixable.

When the lease is up this fall, I will own them. The Mac will become a desktop testing machine, as it's useless as a laptop. The iPad works with pen, so I'll still use it for some purposes. But I wish I could sell them!

I can't though, because unlike Apple, I could not ethically sell these to some other sucker. Such a waste of money. I've lost all respect for Apple and will avoid them at all costs from now on.

Getting the next MS Surface Book for my next laptop.

rocky1138(655) about 14 hours ago [-]

If you're concerned about ethics when you go to sell, you can confirm with the buyer that they have owned one previously and know what they are getting into. That way, you're in the clear.

bnolsen(10000) about 21 hours ago [-]

Lack of competition in the Apple ecosystem, plain and simple. I think people know this before making the jump. Or they work for a company that forces them into choosing between a windows or osx laptop. I'm in this crowd and I have no love for either. I despise apple hardware but hate windows itself far more.

Most people are best served by a good Chromebook. That crowd won't be the ones frequenting this type of website.

TuringNYC(3620) about 20 hours ago [-]

How much of an ecosystem is there between Apple on the desktop vs mobile?

I have an iPhone, iPad, Apple Watch, and AirPods and there is definitely a strong ecosystem there.

However, I ditched the MacBook (due to keyboard issues) and went with a ThinkPad and have been very happy. The ThinkPad has iTunes, so I don't lose my music. I suppose there is iCloud integration, but that seems like a small reason to remain on Apple desktops.

Side note: Other benefits of ThinkPad

- 64GB RAM! Can run multiple VMs easily

- NVIDIA GPU for those who do CUDA work.

RandomBacon(10000) about 20 hours ago [-]

You're spot on, I'm sorry that you're being downvoted.

Apple Inc has a monopoly on the hardware regular users can run its iOS/OS on.

Windows, Linux, Android, Windows Phone OS (when that was a thing), did not. Companies were able to try and provide the best hardware in order to win customers.

Apple doesn't have to provide the best hardware; regular users only have one hardware provider to choose from if they want Apple's iOS/OS.

(The average regular user is not going to make a Hackintosh.)

wyclif(174) 1 day ago [-]

P.T. Barnum (possibly) said it best: 'There's a sucker born every minute.' There are a lot of suckers out there with MacBooks.

rarrrrr(10000) 1 day ago [-]

After watching Louis Rossmann's live repairs and informed bitching, I bought an almost maxed-out Lenovo T480 on the President's Day sale and hackintoshed it. Water-resistant, backlit keyboard that's awesome and isn't crazy loud/fragile. 16 GiB, WQHD (could be brighter), Samsung 1 TiB 970 Pro and giant extended battery. 9 hour run-time. MIL-spec rated.

EDIT: I had a pre-Retina A1278 mid-2012 MBP with 16 GiB and 2 SSD's for a long time. MagSafe 1, okay keyboard, generally bulletproof for a while until the external ports and logic board traces started corroding from humidity and temperature extremes. :( I bought a broken screen MBP for $170 USD to get a working logic board cheap to recover and migrate off. Donated to a sane-but-poor itinerant writer/journalist/commercial fisherman.

makecheck(3877) about 16 hours ago [-]

Apple seems to focus more on the "can cost a lot" definition of "Pro", and not the "get important work done" definition of "Pro". The only correct course when offered a more-expensive and clearly-regressing machine is to not buy it (though I suspect Apple might actually have come out making more money this time around).

They need to also realize that if anyone is likely to accept reasonable trade-offs (like thickness) in favor of a better work machine, it's "pros"! It is insane to regress at this price/feature level.

hollander(4117) about 16 hours ago [-]

Apple has lost sight of its original motto: to be independent, creative, and do the things that other people didn't dare to do or simply never thought of. This created a cult, often despised but mostly admired, and being part of that cult was more important than having the best technical value for the money.

Now their purpose seems to be maximizing the money made off the brand value. That won't last. There is no vision anymore. Well, the 'vision' is thinner, thinner, thinner, all the while making it twice as expensive. I suppose there is a real market for this and many people may want it, but I think there are many users who don't mind a 1cm thicker laptop or not having the fastest custom-made SSD connector. They prefer a lower price and swappable disks and batteries. Oh, and working keyboards. I'm happy with my 2015 MacBook and its good keyboard, but I don't know what I would do if this thing crashed tomorrow...

sonnyblarney(3337) about 23 hours ago [-]

I think the butterfly keys are great; once I got used to them, going back to anything else felt like a step back.

Reliability issues are a problem, but that's all within reason I think. If they repair for free.

The issue with Mac is they are too expensive and a lot of the regular, non-iPhone stuff is languishing.

robertAngst(3920) about 22 hours ago [-]

>Reliability issues are a problem, but that's all within reason I think.

No, when your product doesn't work, that is unacceptable.

coldtea(1255) about 23 hours ago [-]

15+ years as an Apple user, this is my biggest pet peeve.

The other two being:

- the touch strip, which I find mostly annoying. I'd prefer physical buttons with the ability to show different labels (a la Optimus keyboard).

- whichever idiot thought putting the power plug at the bottom of the Magic Mouse was a good idea.

jswizzy(10000) about 23 hours ago [-]

The other big one is when they removed the injection-molded end caps ('cable savers') from their cords.

matwood(10000) about 22 hours ago [-]

> whichever idiot thought putting the power plug at the bottom of the Magic Mouse was a good idea.

Do you have one? It charges very quickly. Basically go make a cup of coffee and it's charged for the day+ when you get back. Leave it plugged in overnight and it's good for almost a month.

lostmyoldone(10000) about 23 hours ago [-]

Dropping the magnetic power cord connector is a major issue for me, but otherwise I wholeheartedly agree.

tsmarsh(10000) about 23 hours ago [-]

You missed charging the Apple Pencil on the ipad pro.

Arrow keys being disabled in Safari iOS

3D Touch

Removing the 3.5mm Jack...

johnwalkr(10000) about 20 hours ago [-]

I read somewhere that it was a deliberate design decision, to keep users from thinking that the mouse works as a regular USB mouse while plugged in. Indeed, I have a Logitech Bluetooth mouse with a micro USB port on its front. It turns out that this is for charging only, not for the mouse interface. This leads to some challenging troubleshooting when the battery dies or it otherwise doesn't work.

tonyedgecombe(3892) about 23 hours ago [-]

Once they decided to make the mouse chargeable it was inevitable the plug would go underneath. A plug that allowed you to continue to use the mouse wouldn't stand up to the stress and strain of being dragged around your desk.

gingericha(10000) about 20 hours ago [-]

I just wish the touch strip had haptic feedback. I don't mind it, but often times I'll find that I've accidentally rested my finger on the escape key and can't figure out why applications/menus aren't working properly.

alehul(3836) about 23 hours ago [-]

> whichever idiot thought putting the power plug at the bottom of the Magic Mouse was a good idea.

For those not familiar with the Apple Mouse, the commenter means that the power plug is on the part that you're supposed to have on the mousepad, meaning you can't charge it while using it, at all.

Easily the worst way to design a mouse, ever. I don't understand how it was approved.

threeseed(10000) about 22 hours ago [-]

People still bringing up the Magic Mouse.

The idea is you weren't supposed to leave it plugged in.

2 hours of charging = 2 months of use so it's not like charging was supposed to be a regular occurrence.

pps43(10000) about 22 hours ago [-]

Discontinuing MagSafe should also be on this list.

lcnmrn(3733) about 18 hours ago [-]

Actually it's the best keyboard I've ever typed on. I just love it. I have a 13-inch MacBook Pro Touch Bar at work. It's fast and reliable with every key press.

headsupftw(10000) about 18 hours ago [-]

I second this. You may hate the butterfly keyboard but I love it. So claiming the design has clearly failed is a little bit of a stretch.

kkarakk(10000) 1 day ago [-]

i am absolutely laughing at this macbook pro 2017 i bought. the keyboard has been replaced 3 times now... the screen was replaced for the stagelight effect (broken connector)... they replaced the whole top bit for that (everything north of the hinge gets replaced)

i finally pawned it off on someone else and they promptly fried the motherboard due to water damage (the air intake ports lead directly to the motherboard, so good luck to any starbucks warriors out there), so that got replaced too.

Ship of Theseus in action - a macbook story. only the bottom baseplate is from the original macbook at this point.

sundvor(3885) 1 day ago [-]

Woa. For the X1 Carbon, Lenovo actually designed ducts to channel accidental water ingress away from the important parts.

It's kind of sad that Apple gets away with just focusing on the glitzy externals - and that their plethora of customers don't care.

threeseed(10000) about 23 hours ago [-]

The air intake ports on the MacBook Pro are on the sides/back:

https://support.apple.com/en-us/HT202179

You must have had the unit deeply submerged in water to get it in via those ports. It would be interesting to know how that happened.

jvatic(10000) about 22 hours ago [-]

My main complaint with their new keyboard design, not having used it enough to encounter any reliability issues, is that it's just way too cramped to actually type on. I've since gone back to an older MBP, but while I was using the newer design I found the only way I could actually type on it was using an external keyboard. There really wasn't anything wrong with the 2012/2015 form factor and I wish they'd go back to that.

hliyan(1827) about 21 hours ago [-]

I too had to go back to an older MBP, but not because the keyboard was cramped. My complaints:

1. Not enough tactile feedback from keys. I have to carefully calibrate the amount of muscle power I use to hit the keys, which paradoxically increases the pain in the joints of my fingers.

2. Lack of a physical ESC key. I touch type, and this makes writing code rather difficult.

3. Lack of USB port. Lack of HDMI port. Lack of magnetic charging port (which is very convenient)

4. Oversized touchpad. Increases accidental touches.

My old MBP is now in its third year, and I'm really hoping a newer, more usable version will come out before it dies.

fumar(2791) about 20 hours ago [-]

I just came back to a MacBook Pro 2018 after several years of experimenting with Windows machines. Based on this thread I am an outlier – I enjoy using the trackpad and touch strip, and the keyboard seems fine.

In the past five years I have used the following: Thinkpad x230, Thinkpad X1 Carbon, Thinkpad x370 Yoga, Thinkpad X1 Yoga, Surface Pro 3, 4, 5, Surface Laptop, Surface Book 1, 2, and a hackintoshed Thinkpad x230. I was a big fan of the ThinkPads because their keyboards tend to have solid travel and they have the TrackPoint. Having the ability to type and move the cursor without moving your hands off home row is great. The Surface lineup has built-in touch and pen capabilities that do come in handy for note taking in meetings, as does the Thinkpad Yoga line. For a year or so, I thought the ThinkPad Yoga line was a good compromise. But then I started to have problems with random reboots, a PWM screen, Bluetooth issues, and random CPU usage. I chalked part of that up to Windows. I bought a refurbished top-spec x230 and hackintoshed that. It was a good middle ground of 'user upgradable hardware' + MacOS. I kept that thing for 1.5 years, and towards the end I used it as my primary device. That was the start of me creeping back to MacOS. edit: Newer Thinkpad models made battery replacement harder by removing the swappable second battery option. I carried the x230 everywhere with an extra battery, which was awesome, but I still didn't get great battery life. I look back and find it funny that I carried up to two batteries around for that thing and probably had less battery life than the new MacBooks.

I prefer to have something that just works. I bought the MacBook 2018 upon release and it's been a workhorse with zero issues thus far. Day to day, I work in adtech and dabble in python on the side. I can't say that for any of the machines above. I even hook this up to an eGPU and play Dirt Rally on high settings. I miss the TrackPoint and touch screen of some other models, but I am doing just fine. Instead, I use an iPad Pro with the Pencil. That is a far better tablet for writing than the Surface line due to the superior app ecosystem on iOS (see Paper, Procreate, Pixelmator Photo, Affinity Design and Photo, etc).

nrjames(4103) about 19 hours ago [-]

I have a 2016 MBP and love the keyboard. I've never had any problems with it. That's not to dismiss people who have experienced quality issues. I don't mind the touch bar, but it's not particularly useful.

kcommam(10000) about 18 hours ago [-]

I feel the exact same way and have always recommended Macs to friends, family and cwoorkers as the computere that 'just works.' I'm been suspicious of people who disagree with that sentiment — to each their own, but I don't think there's much dispuute that Macs just work.

Yet, now, heere I am with a 2018 MacBooko Pro that repeats keys randomly — somoetimes out of theorder they were pushed— andstarting to wonder if I need to change my recommendation. The laptop has been 'fixed' twice now and always works just fine for a day or two before reevertinig back to the same old state. I clean the keyboard daily with compressed air, thouugh that's largely become a symbolic gesture that doesn't fix the issue but at least makes me feel like I'm not crazy.

All that said, much like the author of the article, I'm so bought in to the Apple ecosystem at this point that I've resorted to a certain kind of stooicism — thiis is apparently the set of circumstances I have been giiven and I must live with it.

(Note: no, I'm not too lazy to correct typos noor a partiicluarly poor typer — this post is the unedited resulut of typing on my broken keyboard. Just be happy double spaces don't show up by default in DOM elements.)

MrScruff(10000) about 17 hours ago [-]

Also totally happy with my 2016 MBP, keyboard and all.

ajford(10000) about 16 hours ago [-]

The T series still has the removable batteries, and the newer model years (T460 and up) have weights equivalent to the older X240 and below.

My T460 is within 0.5lbs (~0.25kg) of my work-issued 2015 MBP 13in, and from my quick searches, that seems to be generally consistent up to the modern T490 & 2018 MBP 13s.

I don't have a modern Thinkpad or MBP to compare, but my T460 still gets ~4-5hrs of normal usage, or 2-3hrs of heavy usage (strategy gaming like FTL or Rimworld, running VMs and python dev work, etc). My 2015 MBP 13 gets fairly similar times.

fjp(10000) about 19 hours ago [-]

I'm in the same boat as you. I wish my company had issued me the model without the touchbar, as I very much would prefer physical function and esc keys but other than that I love the machine and have never had issues.

The one thing I would change is that I seem to have to reach way over to the right side of the trackpad to get a 'right-click'.

I also have a ~2016 macbook air and the keyboard SUCKS in comparison to the butterfly keyboard.

cnf(10000) about 13 hours ago [-]

I very much like my MBP keyboard and touchbar as well. And I know plenty of people that share the sentiment.

csomar(912) about 11 hours ago [-]

I do share your sentiment. I live in a country with lots of 'dust'. I had concerns after reading people's reviews but went ahead and bought the latest 2018 version.

Two things:

1- I really enjoy the new Macbook Pro keyboard. Way more than the 2014 one.

2- It gets really dusty and I need to clean it every week, partly because I work in cafes a lot.

So far (6 months), it has been going strong. No issues (the trackpad can be annoying at times, but that's it).

d35007(10000) about 19 hours ago [-]

I bought the 2018 MacBook Pro shortly after its release and I'm pleased with my purchase. I don't really like the keyboard, but I spend the vast majority of my time at a desk with an external keyboard and mouse anyway. My laptop is basically a desktop that I lug to and from work everyday. USB-C has been great for me because it means that I can have 1 hub that handles power, my external monitor, network, and all of my other peripherals. I tell people it's like having a docking station that works 100% of the time.

I don't like the keyboard because I find that I accidentally repeat keystrokes on it a lot. Maybe I just need more practice or lighter fingers. I also have to admit that I don't like to eat around it for fear of spilling crumbs on the keyboard. I've always been pretty tidy around my laptops, but I've never been this worried. It's not a deal breaker for me (obviously), but I hope they fix the keyboard's issues in the next version.

dmitryminkovsky(4026) about 17 hours ago [-]

Naa, you're not alone. I love the new MBP keyboard. My experience suggests that what we have here is a vocal minority. I type faster with the new-style keyboard, and I enjoy the springiness of the keys and their much larger size compared to previous iterations. I also like how their backlighting works compared to the old style. If people are having trouble with them, Apple should resolve those issues, but in my opinion the keyboards and new MBPs are excellent machines.





Historical Discussions: Facebook 'unintentionally uploaded' 1.5M people's email contacts without consent (April 18, 2019: 489 points)

(513) Facebook 'unintentionally uploaded' 1.5M people's email contacts without consent

513 points 2 days ago by starmftronajoll in 4101st position

www.businessinsider.com | Estimated reading time – 4 minutes | comments | anchor

Facebook harvested the email contacts of 1.5 million users without their knowledge or consent when they opened their accounts.

Since May 2016, the social-networking company has collected the contact lists of 1.5 million users new to the social network, Business Insider can reveal. The Silicon Valley company said the contact data was 'unintentionally uploaded to Facebook,' and it is now deleting them.

The revelation comes after pseudonymous security researcher e-sushi noticed that Facebook was asking some users to enter their email passwords when they signed up for new accounts to verify their identities, a move widely condemned by security experts. Business Insider then discovered that if you entered your email password, a message popped up saying it was 'importing' your contacts without asking for permission first.

At the time, it wasn't clear what was happening — but on Wednesday, Facebook disclosed to Business Insider that 1.5 million people's contacts were collected this way and fed into Facebook's systems, where they were used to improve Facebook's ad targeting, build Facebook's web of social connections, and recommend friends to add.

A Facebook spokesperson said before May 2016, it offered an option to verify a user's account using their email password and voluntarily upload their contacts at the same time. However, they said, the company changed the feature, and the text informing users that their contacts would be uploaded was deleted — but the underlying functionality was not.

Facebook didn't access the content of users' emails, the spokesperson added. But users' contacts can still be highly sensitive data — revealing who people are communicating with and connected to.

While 1.5 million people's contact books were directly harvested by Facebook, the total number of people whose contact information was improperly obtained by Facebook may well be in the dozens or even hundreds of millions, as people sometimes have hundreds of contacts stored on their email accounts. The spokesperson could not provide a figure for the total number of contacts obtained this way.

Users weren't given any warning before their contact data was grabbed

The screenshot below shows the password entry page users saw upon sign up. After they entered their password and clicked the blue 'connect' button, Facebook would begin harvesting users' email contact data without asking for permission.

Screenshot/Business Insider

After clicking the blue 'connect' button, a dialog box (screenshot below) popped up saying 'importing contacts.' There was no way to opt out, cancel the process, or interrupt it midway through.

Business Insider discovered this was happening by signing up for Facebook with a fake account before Facebook discontinued the password verification feature.

Screenshot/Rob Price

From one crisis to another

The incident is the latest privacy misstep from the beleaguered technology giant, which has lurched from scandal to scandal over the past two years.

Since the Cambridge Analytica scandal in early 2018, when it emerged that the political firm had illicitly harvested tens of millions of Facebook users' data, the company's approach to handling users' data has come under intense scrutiny. More recently, in March 2019, the company disclosed that it was inadvertently storing hundreds of millions of users' account passwords in plaintext, contrary to security best practices.

Facebook now plans to notify the 1.5 million users affected over the coming days and delete their contacts from the company's systems.

'Last month we stopped offering email password verification as an option for people verifying their account when signing up for Facebook for the first time. When we looked into the steps people were going through to verify their accounts we found that in some cases people's email contacts were also unintentionally uploaded to Facebook when they created their account,' the spokesperson said in a statement.

'We estimate that up to 1.5 million people's email contacts may have been uploaded. These contacts were not shared with anyone and we're deleting them. We've fixed the underlying issue and are notifying people whose contacts were imported. People can also review and manage the contacts they share with Facebook in their settings.'


Got a tip? Contact this reporter via encrypted messaging app Signal at +1 (650) 636-6268 using a non-work phone, email at [email protected], Telegram or WeChat at robaeprice, or Twitter DM at @robaeprice. (PR pitches by email only, please.) You can also contact Business Insider securely via SecureDrop.




All Comments: [-] | anchor

smt88(4107) 2 days ago [-]

Saying 'unintentionally' here is like saying you unintentionally stole someone's TV when they gave you their key to walk their dog.

It takes extra work to upload those contacts, which means several managers and developers decided to do it and then spent time implementing it.

For the FB employees reading this: what is your tipping point? Would you say no to that assignment?

iamrobschiavone(4104) 2 days ago [-]

A common practice is to keep developers unaware of the real objective of their work (like Uber, in another comment on HN, https://news.ycombinator.com/item?id=13786384):

- developer A is tasked to create the prompt to ask for username and password of the email account

- developer B is tasked to call some API to upload contacts from email account

- developer C is tasked to bind two functionalities.

Now replace developers with teams and you see how simple it is for the average developer to underestimate the scope and the ethical bounds of a given task.
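
To make that concrete, here is a minimal Python sketch of the A/B/C split described above. Everything in it is hypothetical (the function names, data, and glue are invented, not Facebook's code); it only illustrates how each piece can look routine in isolation.

    def prompt_for_email_credentials():
        """Developer A's task: render a 'verify your email' form."""
        return {"address": "user@example.com", "password": "hunter2"}

    def fetch_contacts(credentials):
        """Developer B's task: call some mail API, return the address book."""
        return ["alice@example.com", "bob@example.com"]  # stubbed out

    def store_contacts(user_id, contacts):
        """Developer B again: persist contacts for friend suggestions."""
        pass  # stubbed out

    def on_signup(user_id):
        """Developer C's task: bind the two functionalities together."""
        creds = prompt_for_email_credentials()
        store_contacts(user_id, fetch_contacts(creds))

Only whoever reviews on_signup sees the full pipeline; A and B each shipped something defensible on its own.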

johnchristopher(3654) 2 days ago [-]

> It takes extra work to upload those contacts, which means several managers and developers decided to do it and then spent time implementing it.

Not really. Facebook is a bunch of autonomous services (registration, access, tracking, activities, etc.) accessing shared databases (chat logs, activities, media uploads, etc.) with some kind of automatic implicit and explicit ACL in place. The suggestion/contact service got access to data provided through the email-not-working-with-oauth-so-let-us-use-automatic-token-delivery-and-confirmation-by-accessing-user-emails flow because it was told a new source of contacts was available for those users. So, not a straight path.

Accident/Blunder > Evil.

Now. GDPR? GDPR. And because of GDPR, those things aren't supposed to happen in Europe.

pixl97(10000) 2 days ago [-]

>what is your tipping point? Would you say no to that assignment?

When FB stops giving them a check.

At least that has been my experience watching programmers at other companies. Unless ethically bound by regulation and law, few people seem to have ethics.

codedokode(4109) 2 days ago [-]

FB would probably prefer a word like 'lent' or 'took in for repair' instead of 'stole'.

saiya-jin(10000) 2 days ago [-]

Considering the vast crowds of folks happily working for amoral places like investment banks (see the 2008 crisis and its consequences) or wealth management (rich folks trying to keep as much money as possible from being taxed and used for public spending), the moral bar for the usual smart person is actually pretty low. Optimizing some ads seems pretty harmless by comparison.

As long as you don't literally see the evil being done (say, a row of inmates being sent to gas chambers), there are almost endless ways to persuade yourself that everything is actually OK and fine.

dwighttk(3016) 2 days ago [-]

I just tripped and a giant laser melted all the gold in Ft Knox which drained down a pipe into a storage container under my backyard.

lkbm(10000) 1 day ago [-]

> It takes extra work to upload those contacts

It also takes extra work to ask consent. You build it. You don't notice that your confirmation screen fails to trigger. You've just unintentionally uploaded a bunch of data without consent, when your intention was to do it with consent.

It's still pretty darn negligent, but it's easy to see how it could be done unintentionally.
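
A minimal Python sketch of that failure mode; the flag and helper names are invented for illustration, not taken from Facebook's code:

    def ask_user_to_share_contacts(user):
        return False  # stub for the dialog that no longer exists

    def fetch_contacts(credentials):
        return []     # stub

    def upload_contacts(user, contacts):
        pass          # stub

    SHOW_CONSENT_PROMPT = False  # the opt-in prompt was removed in a redesign

    def verify_account(user, email_credentials):
        if SHOW_CONSENT_PROMPT:
            if not ask_user_to_share_contacts(user):
                return  # user declined; stop here
        # Bug: with the prompt gone, execution falls through to the
        # upload unconditionally -- collection without consent.
        upload_contacts(user, fetch_contacts(email_credentials))

The upload path should have been gated on an affirmative answer, not on whether the question happened to be asked.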

jayawayjayyay(10000) 2 days ago [-]

From the article it sounds like there was a prompt for permission that got removed:

> Facebook told Gizmodo via email that in May 2016 it made a revision to the registration process, which originally asked the affected users for permission to upload contact lists. That change removed the opt-in prompt, though the company did not realize the underlying functionality was still operating in some cases.

It doesn't take a conspiracy to understand how a bug like that could happen.

tjpnz(10000) 2 days ago [-]

>For the FB employees reading this: what is your tipping point? Would you say no to that assignment?

I have an open ended question aimed mainly towards founders. Would you have any issues in hiring a candidate with Facebook on their resume?

maxxxxx(3988) 2 days ago [-]

"For the FB employees reading this: what is your tipping point? Would you say no to that assignment?"

There is a good chance that they didn't know how their work would eventually be used. That's the problem with big companies. Most people are far away from seeing the consequences of their work.

matt4077(1176) 2 days ago [-]

They did have the upload-your-address-book functionality before they instituted this check. I'm very much hoping to see Facebook suffer for this, but I could conceivably see a scenario where they reused code that did more than they wanted.

NoodleIncident(10000) 1 day ago [-]

In The Fine Article, it says that the feature was built on purpose, and previously asked for permission. The accident is that it wasn't completely removed.

blauditore(10000) 2 days ago [-]

This may be an unpopular opinion, but things like this happen. Someone gets the task to implement a login and either doesn't realize they should be using OAuth or is simply too lazy to do so. Next, someone has the idea to suggest friends, so let's grab some email contacts for that purpose.

That stuff happens all the time at small companies. While it's certainly bad practice, it's often not evil intent, but just lack of technical skills (for the former issue) and missing sense for potential privacy issues (for the latter).

In case of a large company like Facebook, one could expect they'd have processes and education in place to prevent such incidents, but I guess this happened a while back when FB was much smaller than it is now.

ascendantlogic(10000) 2 days ago [-]

> This may be an unpopular opinion, but things like this happen.

Yes, and at Facebook, in the context of data gathering, they seem to happen ALL THE TIME. If they actually cared about privacy they'd make changes to curb these sorts of 'mistakes', but taken in aggregate the relentless 'bugs' show a pattern of willful malevolence.

flokie(10000) 2 days ago [-]

'Next, someone has the idea to suggest friends, so let's grab some email contacts for that purpose.'

You're joking right?

Balgair(2928) 2 days ago [-]

I'd buy that excuse back in 2008.

But it's been over a decade of these types of reports about FB and their behavior. FB should be asymptoting towards good ethical standards and software practices. These reports should be getting more and more rare.

Instead, they seem to be growing exponentially away from good ethics and practices [0]. It feels like it's getting worse, faster, not less worse and slower.

Here's a partial list : https://en.wikipedia.org/wiki/Criticism_of_Facebook

[0] Yes, I'm being a bit hyperbolic with the graph analogies.

ajuc(3770) 2 days ago [-]

They broke the law. They should pay. 'Accidentally' is irrelevant here, even if you believe them.

jammygit(10000) 2 days ago [-]

First they ask for email passwords. Then the new users assume Facebook won't comprehensively mine their emails. Then Facebook awkwardly gets caught uploading 1.5 million users' email contacts.

It doesn't make sense for people to trust the service at all unless you assume one of two things:

1 - Despite all the outrage on Hacker News and the NYT stories, our neighbours down the street and family members still don't know how Facebook works or what is done with their data

2 - They don't care about their data privacy. I've heard this claim many times, but the people saying it often change their minds when they read more news stories. I really do think people have trouble assuming the worst about the intentions of others and are inclined to be trusting.

edit: clarification

kerng(3393) 2 days ago [-]

I think the backlash is mostly just delayed. At some point revenue will take a hit because engineers might refuse to implement these 'unintentional' and 'accidental' features on time.

There is no doubt that the public image of FB is significantly changing - a year from now things will not look better for Facebook than they do today; most likely worse, I'd say. This is not something they can turn around anymore - the leadership is not learning and repeats the same mistakes over and over again.

Fnoord(3868) 2 days ago [-]

Group #2 somehow lacks the imagination to see what could go wrong. They will learn when the cause and effect of Facebook usage is put in their face. I guess the recent news does not push it in their face enough.

It's like that with skimming, lock picking, server security, infrastructure security, basically everything security related.

mnm1(3666) 2 days ago [-]

Don't attribute to stupidity what can be attributed to malice. No, I didn't get that backwards.

p1esk(2765) 2 days ago [-]

3 - those who know how fb works, assume the worst about its intentions, and still don't care and keep using it

nvssj(10000) 2 days ago [-]

>They don't care about their data privacy. I've heard this claim many times, but the people saying it often change their minds when they read more news stories.

'People don't care about a problem initially, then when it becomes graver they start to care'

So normal, expected behaviour?

darkpuma(10000) 2 days ago [-]

> 'I really do think people have trouble assuming the worst about the intentions of others and are inclined to be trusting.'

I think you hit the nail on the head. Even on HN, it's not uncommon to see a few comments on each negative story about facebook accusing the media of a conspiracy against Facebook; claiming that the media is wrongly maligning Facebook who is merely the unfortunate victim of a series of coincidental accidents.

They have trouble accepting that a tech corporation like facebook actually might be rotten.

soulofmischief(10000) 1 day ago [-]

I try to be an advocate for privacy. I really do. But everyone just calls me paranoid, asks why I need to be worried about my government like I have something to hide, or just stares blankly at me because they can't be bothered to actually think about the words climbing through their ears.

I'm going mental over the explosion of televisions in the last half decade which identify and report any content you watch on the TV by default, in exchange for $100-150 off the television (which was fluff to begin with... it's not a direct trade of $100 for your data).

I've set up about a dozen of these now for people and they just stare blankly while I try to explain what 'Auto Content Recognition' means... Hello 1984.

lbotos(3979) 2 days ago [-]

Related:

WhatsApp on iOS recently updated, and now will only show phone numbers for contacts UNLESS I upload my contacts.

In the UI, if I click on a number it will take me to the profile where I can see that user's name, ~Tom, but wow, waddamove... Have we reached the point where FB can't make any more money until they go deeper, or is this just drag-net 'data is the new oil'?

dingaling(3980) 2 days ago [-]

It's the same on Android, with Contacts permission blocked it will show only numbers except for groups.

Furthermore it won't let you start a chat with anyone unless it can access your contacts to find them. However there's a great little app on F-Droid called 'Open in Whatsapp' that lets you start a chat with any arbitrary phone number.

qwertox(10000) 1 day ago [-]

By now we all know how Mark Zuckerberg rolls.

'Dumb fucks' wasn't just an episode, that's his character.

He'd probably be a good friend of Martin Shkreli if he wouldn't care that much about what others think of him.

OrgNet(4010) 1 day ago [-]

I'm glad that Zucky's comment finally came up but I'm surprised that it took this long..

Rafuino(3582) 2 days ago [-]

So, when is the FTC going to actually bring down the hammer on FB for violating the consent agreement? There's no way this was 'unintentional.'

At $40,000 per user per day [1], even at just one day of violation, that's a $60 billion fine FB should be liable for. 'Under the settlement, Facebook agreed to get consent from users before sharing their data with third parties,' so this seems to be EXACTLY in violation of that agreement.

[1] https://www.cnet.com/news/facebooks-ftc-consent-decree-deal-...

*Edit: on second thought, it should be even higher, as each of the 1.5M users had multiple contacts uploaded. So, for example, let's say 1 user had 150 contacts who were not part of the other 1.5M users who had contacts uploaded. That alone should be a violation of the consent rights of those 150 people, so $6 million per day. If every one of the 1.5 million people had, on average, 150 contacts exclusive of the other 1.5 million people who had contact info uploaded, that's a $9 trillion liability for one day of violation.
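
For what it's worth, the multiplications in this comment do check out, taking its own assumptions ($40,000 per violation per day, 150 contacts per user) at face value:

    1,500,000 users x $40,000               = $60,000,000,000    ($60B)
    150 contacts x $40,000                  = $6,000,000         ($6M, one user)
    1,500,000 users x 150 x $40,000         = $9,000,000,000,000 ($9T)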

The FTC has been toothless on this for quite some time now, so I'm expecting no significant action, as FB lawyers will argue that, technically, no one had data shared with 'third parties.' Well, shouldn't my contact info shared by a friend with FB be a consent violation, as FB is a 'third party' from my perspective?

will_brown(1508) 1 day ago [-]

Also, let's see a list of the various FTC settlements with FB. And a list of FTC employees who worked on those settlements now working for big tech.

I know one FTC employee who worked on the 2011 FTC/FB settlement (which required FB to obtain independent 3rd party audits certifying their privacy program for 20 years...never mind the subsequent violations and settlements) is now "head of privacy" for a certain social networking company.

elmo2you(10000) 1 day ago [-]

Maybe I'm just ignorant, but I do not really see how this violates the FTC agreement, because it covers Facebook sharing user data (stored/tracked/gathered by Facebook) with third parties.

However, what Facebook did is far worse than violating that agreement. Facebook gained access to user data on third-party systems, to which they should never have had access. They gained this (unauthorized) access (at best without clear consent) on a false pretense (disguised as a security-related requirement). Then they imported user data, with no relationship to their stated goal/requirement, into their platform.

Associative contact information is a highly valuable commodity to any company involved in marketing and social media. I've seen a lot of people argue that this could have been the result of a lapse of oversight, but that sounds like arguing that a gemstone trader might have 'accidentally' stolen a large quantity of rough gemstones while claiming not to have known their value. Even if theoretically possible, it's extremely unlikely that nobody within Facebook knew/realized the value of this data.

Either way, Facebook gained access to highly valuable assets. Even in the unlikely event of a sincere lack of oversight, it would demonstrate a level of incompetence that warrants holding them criminally liable all the same.

Moreover, Facebook might actually have outright violated the Computer Fraud and Abuse Act (CFAA), in particular the 'access in excess of authorization' part, but I'm not sure.

michaelmior(3756) 1 day ago [-]

I'm not sure what the law currently is, but it seems that intent shouldn't really matter that much here.

u801e(10000) 2 days ago [-]

Why are companies even asking users to provide passwords for unrelated services? For example, when I added an external account on Etrade, they gave me the option of same day verification of that account if I provided them my online banking account credentials.

This practice opens up a significant potential for abuse and should be illegal.

matt4077(1176) 2 days ago [-]

Is this question rhetorical?

Your online banking is known to be verified, therefore another company can piggyback on that verification.

carnagii(10000) 2 days ago [-]

18 USC 1030 (a)(4)

(4) knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value

https://www.law.cornell.edu/uscode/text/18/1030

A criminal investigation into whether or not this was really accidental would be entirely warranted here. If there was intent to access this information without authorized access that is criminal.

squarefoot(3930) 2 days ago [-]

'Obtaining anything of value' could be satisfied by getting personal data, which today is akin to profit, but the 'intent to defraud' would be hard to prove in court, save for some very broad and dangerous interpretation of 'intent' which could equate sloppiness with malice, a precedent that might ruin the lives of honest people who just happen to be clueless sysadmins or developers. Totally agree, though, on investigating whether this was really accidental or not; if it was done on purpose I would expect FB to be hit really hard.

matt4077(1176) 2 days ago [-]

Not a lawyer, but at least in my jurisdiction, fraud requires a monetary loss by the victim.

Generally, civil law is better suited for this sort of thing, no matter how good a pitchfork feels in your hand. As but one of the reasons, the required standard of proof is much lower.

levosmetalo(3059) 2 days ago [-]

> A criminal investigation into whether or not this was really accidental would be entirely warranted here. If there was intent to access this information without authorized access that is criminal.

I don't understand this. Claiming that something is an accident and not intentional usually isn't much of an excuse when it comes to criminal acts.

howard941(223) 2 days ago [-]

I think the 'intent to defraud' (scienter) requirement's going to present a proof hurdle. Not necessarily insurmountable but it's still there.

mannykannot(4043) 1 day ago [-]

Simply asking for email passwords indicates an intent to gain unauthorized access, and disguising the request as being part of a security-enhancing action eliminates all doubt.

james246(10000) 2 days ago [-]

LinkedIn pulled something similar a few years back. At the time, I was using the same password for both my email and LinkedIn account, and found that people from my email address book were showing up as suggested connections. I can only assume 'consent' for this was buried in the T&Cs.

shereadsthenews(10000) 2 days ago [-]

Yeah people always forget this! LinkedIn is super-shady and what they were doing was the darkest of all patterns.

helloindia(10000) about 22 hours ago [-]

In the case of LinkedIn, they do ask for consent. But even if you didn't allow it to export your contacts, other people may have allowed it, linking their LinkedIn profiles with your email address. LinkedIn then shows those people to you as suggested connections. This personally bothered me for a while back then.

mikro2nd(2727) 2 days ago [-]

FB has said they'll be notifying the people whose contacts they 'unintentionally' uploaded. How about notifying the contacts themselves, whose private details Facebook illicitly obtained, that their privacy has been compromised? The innocents who signed up for FB and had their contact lists stolen (let's call it what it is) may or may not feel any moral obligation (more likely, they don't even see the issue) to notify their friends/family/plumber whose details they 'lost' to a thief.

hopler(10000) 2 days ago [-]

That's right. Whenever a computer system is breached, it is the breacher's responsibility to notify the affected people, not the entity entrusted with the information. That's why it's generally agreed that Equifax did nothing wrong when its credit data was accessed.

fencepost(4117) 2 days ago [-]

How about notifying those contacts whose private details they illicitly obtained that their privacy has been compromised

Because there's a difference between 'we screwed up and obtained this' and 'we screwed up, obtained this, then used it. Hope our use didn't result in any problems for you.'

PedroBatista(10000) 2 days ago [-]

It's amazing what these companies can get away with without paying a single dime to anyone.

javagram(10000) 2 days ago [-]

This seems like a case similar to the Google WiFi data collection. Code written for one reason was reused in a different project without understanding what it would do.

Here's an example page from 2011 talking about Facebook's old feature to import contacts by providing it your email username and password. This was at a point when many webmail services didn't offer an OAuth API to do this, so it did make some sense at the time. It was still safer to do a CSV export and then import, but much easier for users to provide the password directly.

https://www.techwalla.com/articles/how-to-import-contacts-to...

> Type your email address and password for the Web-based email or instant-messaging service that you want to import into the dialog boxes and click 'Find Friends.'

matt4077(1176) 2 days ago [-]

I thought of this as well. One difference, at least subjectively, is that Google seems to make far fewer of such mistakes.

Just as with people, it's sometimes difficult to judge them for a single act. Only by aggregating behavior over time can we learn of their true character.

And Facebook's rotten.

rchaud(10000) 2 days ago [-]

FB's public comments about these remind me a lot of the '5 Standard Excuses' scene in the '80s BBC sitcom Yes Minister, where a civil servant lists the best CYA mea culpas for politicians to use when something goes wrong.

1. It occurred before certain important facts were known, and couldn't happen again

2. It was an unfortunate lapse by an individual, which has now been dealt with under internal disciplinary procedures.

3. There is a perfectly satisfactory explanation for everything, but security forbids its disclosure.

4. It has only gone wrong because of heavy cuts in staff and budget which have stretched supervisory resources beyond their limits.

5. It was a worthwhile experiment, now abandoned, but not before it had provided much valuable data and considerable employment.

cyphar(3630) 2 days ago [-]

For those who haven't seen the clip, [1]. Yes Minister is a brilliant piece of satire (though it does have a somewhat unfortunate Thatcher-esque streak when it comes to discussion of unions -- then again, it would've been difficult to avoid ridiculing unions in satire from the 1980s).

[1]: https://www.youtube.com/watch?v=6Y4PEqvk0Jg

3xblah(10000) 2 days ago [-]

Can someone use a throwaway e-mail address to sign up for Facebook?

Once the e-mail address is validated, is there any further need for a valid e-mail address to continue using FB?

Historical fact: Going back to the days when a university address was required, if the user created her Facebook account while at university and her e-mail address later expired when she graduated, FB did not disable the account.

Unless one wants to get notifications and other FB crud via email, AFAIK there is no need for a working e-mail address to use FB.

LinuxBender(346) 2 days ago [-]

Yes. I use throw-away email addresses for everything. When a company gets popped or 'accidentally' leaks my email address, I simply add a header check and reject or discard their mail. I was on FB for 2 weeks when it started, and I still see them in my logs from time to time trying to fish me back into the system.
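
One way to read that setup, sketched in Python (the commenter doesn't say what mail server they run, so the alias name and mechanism here are purely illustrative): hand out one unique alias per signup, and once a company leaks it, blocklist the alias and discard anything addressed to it.

    from email.parser import BytesParser

    BURNED_ALIASES = {"facebook-2008@example.com"}  # aliases known to be leaked

    def should_discard(raw_message: bytes) -> bool:
        """Return True if the message is addressed to a burned alias."""
        msg = BytesParser().parsebytes(raw_message)
        recipient = (msg["To"] or "").strip().lower()
        return recipient in BURNED_ALIASES

A nice side effect is attribution: mail arriving at a burned alias tells you exactly who leaked or sold the address.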

nvssj(10000) 2 days ago [-]

Just use a throwaway email account AND keep it? At some point they might decide to lock you out if you log in from a different place; I think it's better if you keep the email account safe.

OrgNet(4010) 2 days ago [-]

I created a temporary email on my domain to use for the Facebook account creation and then disabled it so I'd stop getting spammed. I can always re-enable it if I ever need to.

AmIDev(10000) 1 day ago [-]

In my experience, Nope.

I wanted to create a FB account while giving as minimal data as possible. While it's possible to create an account using temporary emails / temporary phone numbers, FB eventually asks you to submit more details.

This includes clicking verification links, uploading your photo, providing phone numbers etc.

Even when I managed to do all these (using fake data), my accounts got disabled in a few days.

PS: when I used the email ID associated with the FB account I deleted back in ~2012, I found out it wasn't deleted. FB asked me to recognize pictures of my friends. So I believe no detail that ever passes the event horizon of Facebook can ever leave it.

blibble(4005) 2 days ago [-]

unintentional my foot

the code to implement that functionality didn't come from nowhere

brianpgordon(3969) 2 days ago [-]

Apparently Facebook is claiming that the functionality came from a separate 'import contacts' feature that used to exist. But I agree; the idea that the import logic could have slipped into the login process accidentally is ludicrous. Or at least it indicates an outrageous lack of care on Facebook's part.

1024core(4110) 2 days ago [-]

The only way FB will change its ways is if (a) good engineers stop joining them, and (b) good engineers at FB start leaving. This will threaten their entire growth prospects and finally bring about change.

I was having discussions with an FB recruiter and some of their senior managers. I just informed them that I won't be pursuing that anymore.

FB engineers who are on HN: why are you still there? You can make similar money at several other companies without sacrificing your soul!

fbthrow_xyzzy(10000) 2 days ago [-]

Please don't downvote me into being the same color as the page background. I'm giving a serious answer to a question that was posed.

This has been asked before on HN. The genuine answer is some combination of:

* criticisms of FB are wildly exaggerated. This takes many forms, but in this particular case I think it's the issue of attributing to malice what's best explained by incompetence. Somebody probably just reused some old email importing code without understanding it thoroughly. If you know anything about how FB works, that's infinitely more plausible than some shady conspiracy to unethically harvest the contacts of a small percentage of users for a slight improvement in ranking or targeting.

Facebook is not some well-oiled machine, it is a jumbled mess of thousands of junior engineers, perpetually barely avoiding collapsing under its own weight.

* People inside FB generally believe, whatever they think of Zuck, that he doesn't just outright lie about verifiable facts. The entire code repository is completely open to all employees. If adding this feature really was malicious and FB's response is an outright lie, somebody WILL find the commit and leak it.

* Even if FB is doing harm, on balance the good it's doing is greater. It has made communication between humans easier and lower-friction which has many upsides.

Part of this is that all the upsides are concrete and obvious (people fall in love on Facebook/IG/MN/WA, they stay in touch with friends and family, they run a business, etc). Whereas the downsides are abstract and hypothetical (maybe someday someone will use Facebook's collected data for some nefarious purpose).

* Even if all of the above is false and FB really is harmful to the world, the situation certainly won't be improved by thinking people quitting, and leaving the company totally in the hands of yes-men who drink all the kool-aid.

JustSomeNobody(3879) 2 days ago [-]

I would assume that FB has gotten pretty good at hiring devs that match their culture. FB devs aren't reading HN articles about how bad FB is. FB devs are at FB because of the 'prestige' of having been selected from tens of thousands of candidates. They're there for the money. They're there because they enjoy the projects. They're not there because they have some moral obligation to change FB.

ahoy(10000) 2 days ago [-]

> You can make similar money at several other companies without sacrificing your soul!

I'm not so sure. Certainly Google, Amazon, et al are just as bad as facebook.

ummonk(4069) 1 day ago [-]

>You can make similar money at several other companies without sacrificing your soul!

Google collects significantly more data than Facebook, and has a sordid past with sexual harassment and inappropriate relationships. Lyft, Uber, and AirBnB have openly flouted regulations, and that doesn't count Uber's other scandals. LinkedIn grew by emailing everyone's contacts without their permission (if you think what FB did here is bad, LI was far worse). High frequency trading and other fintech companies engage in front-running and derivatives trading that may be contributing to market volatility and systemic risk.

Comparably paying companies pretty much all have questionable histories.

Meanwhile, Mark Zuckerberg has committed to investing in improving Facebook even at great expense (don't believe me? look up what triggered the nosedive in Facebook's stock last summer). Do you think Facebook will improve more if conscientious engineers left the company?

save_ferris(3832) 2 days ago [-]

Regulation is much more realistic, IMO.

The tech industry worships money and those who make it, and there are plenty of engineers who'd take the FB compensation package in a heartbeat, regardless of FB's public image problem.

This idea that the public will act together morally to stop corporate malfeasance while sacrificing their good fortunes isn't that realistic. Look at the FB shareholder situation. Lots of shareholders are angry at Zuck but can't do anything about it. None of them seem particularly interested in selling their shares because they don't want to have to pay for his bad behavior.

gyaniv(4117) 2 days ago [-]

Can't someone file a class action lawsuit against Facebook?

I mean, it's nice that they are deleting the information now, but they clearly did something wrong and, by basic standards, they should be punished. Deleting the stolen information isn't punishment, and since they probably won't delete any new ad-targeting information they derived from the contacts, they are still profiting from it, so the punishment should be more than just a small fine (which I hope they at least get).

I'm just sick of them (and other companies) 'accidentally' doing something wrong and barely getting a slap on the wrist.

faxi(10000) 2 days ago [-]

There already is a $78B class action lawsuit against Facebook over the Cambridge Analytica scandal: $1000 per American whose information was harvested. It's hard to google for, however.

yakubin(10000) 2 days ago [-]

Why would anyone just give a site their password to their email account? And to Facebook on top of that?

smt88(4107) 2 days ago [-]

Most people don't understand OAuth, so they don't know the difference between OAuth and giving out their password. Most people don't know they're doing this with bank scrapers like Mint!

I myself have had trouble figuring out whether certain dialogs were OAuth dialogs or just skimming my password, and I've been in web software for 20 years. A layperson has no chance.
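
The structural difference is easy to state but hard to see in a dialog box: with OAuth, the relying site never handles your password; it redirects you to the provider's own domain and receives only a scoped token afterwards. A Python sketch of a standard OAuth 2.0 authorization URL, with placeholder client values rather than any real provider's endpoint:

    from urllib.parse import urlencode

    def oauth_authorize_url():
        params = {
            "client_id": "EXAMPLE_APP_ID",                      # placeholder
            "redirect_uri": "https://app.example.com/callback",  # placeholder
            "response_type": "code",
            "scope": "contacts.readonly",
        }
        # The password form lives on the provider's domain, not the
        # relying site's -- the address bar is the one tell users have.
        return ("https://accounts.provider.example/oauth2/auth?"
                + urlencode(params))

A page that asks for your email password directly, on its own domain, is by definition not doing this.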

xnyan(10000) 2 days ago [-]

This is not at all abnormal behavior. I'm consistently amazed by the HN crowd's lack of awareness of the habits of most users. Most people do not think about what they do on a computer even a fraction as often as a developer or other user here would.

My mother, for example, does not really understand that websites are run by individual entities. There's one 'internet' and all websites are kind of like a strip mall under general management, so in her mind, if one page on Facebook asks for a password to read my email, how is that any different from reading my email on the Yahoo page? All she knows is that Facebook, an 'official' website, asked for a password.

Cthulhu_(10000) 2 days ago [-]

Convenience. That is, Facebook - and others, like Skype - tells new users that the easiest and quickest way to find your friends is to send them your contacts so they can cross-reference the users.

And that, combined with me not paying attention, is how all my e-mail contacts got an email from Facebook in which I invited them to FB. That wasn't the intent!

username223(3706) 2 days ago [-]

It's a pretty easy mistake to make when you're new to the web, or simply don't care all that much how it works. I made the mistake of giving someone my contacts once when I was new to this stuff, and had many apology emails to send when my friends were spammed as a result. It was a harsh lesson in the web's fundamental hostility.

zach43(4108) 2 days ago [-]

People who aren't very tech/privacy-savvy, like elderly people, or kids/teenagers.

I remember signing up for facebook when I was in high school, and I probably would've provided my email password if facebook asked for it...as an adult now I wouldn't provide my email password to anyone, of course.

galfarragem(694) 2 days ago [-]

I'm pretty sure LinkedIn does or used to do the same.

gpvos(2074) 2 days ago [-]

Still does. The apparently popular German payment system Sofortüberweisung (now run by Klarna) even requests the password of your bank account.

durnygbur(10000) 2 days ago [-]

yes LinkedIn was certainly using dark UI patterns to upload the contact list and send invites to everyone on it.

azimuth11(10000) 2 days ago [-]

There was a class action lawsuit against them (LinkedIn lost it, IIRC) for what they did. I believe they would try to connect you with any of your email contacts if you logged in with OAuth.

OrgNet(4010) 2 days ago [-]

LinkedIn was pretty bad, but Facebook was saying that your login information was only going to get used to verify your email. smt88 has a good analogy up there

maxheadroom(10000) 2 days ago [-]

>Facebook says that it didn't mean to upload these contacts

How can you not mean to? It's one thing to say that, were it something tangible, like paper, 'Sorry, mate. These pages snuck in with the others. Sorry about that. We'll pull it out. No worries.'

Pulling contacts and uploading them is not a passive action; it requires active effort.

>and is now in the process of deleting them.

So, the question must then be asked: How do they differentiate the sources of contacts associated with an account, unless they're logging that, as well? If they're not logging that, then how are they, presumably, deleting those contacts?

Are we taking bets on Facebook being in the news again, in a month or so's time, for being found not to have deleted them? :)

coldcode(3323) 2 days ago [-]

So Facebook has no QA or Facebook has QA no one listens to? I imagine the latter.

mannykannot(4043) 2 days ago [-]

> Pulling contacts and uploading them is not a passive action but takes active action.

Action such as 'accidentally' asking for email passwords. It is quite remarkable how these accidents line up just so.

Grammar-checking programs should be flagging any use of 'accident', 'accidentally', 'unintended' and 'unintentionally' whenever they appear in the same sentence as 'Facebook' and are not within quotes.

javajosh(3720) 2 days ago [-]

>How can you not mean to?

Indeed. Expect the next headline to be, 'Facebook 'unintentionally failed to delete' 1.5M people's contacts, which they'd previously unintentionally uploaded'.

stemuk(4055) 2 days ago [-]

This seems like 'growth hacking' gone wrong. Facebook's growth has been losing momentum for several years now, and it seems to me they are trying to make up for it by using every trick they have up their sleeves.

They might want to rethink their motto, 'Move fast and break things'.

pluma(3780) 2 days ago [-]

> How can you not mean to?

It's my understanding that they used to do this entirely intentionally at one point via an 'import contacts from mail' feature. Then they dropped the feature, and when they later added the 'sign in with e-mail to verify your identity' feature, someone reused the old code without being aware that it would also harvest the contacts, which they didn't want this time.

It's the opposite of 'privacy by default', basically.

hluska(3830) 2 days ago [-]

At some point, some government is going to have to step in and stop Facebook. Five years ago, I would not have believed that I would have supported government action. Now, I'm afraid for the future if there is no intervention.

swebs(3841) 2 days ago [-]

Have you read the Prism papers? Governments love this mass data collection since it makes their job so much easier.

Cthulhu_(10000) 2 days ago [-]

I don't know if you've followed the news, but multiple governments have investigated, sued and fined Facebook. A quick Google indicates Facebook may end up paying 1.6 billion to the EU. The UK is doing an investigation too, with FB's impact on the Brexit referendum, as well as the whole Cambridge Analytica thing.

If you're thinking Facebook is getting away with it, you're wrong.

Of course, they're mainly getting fined; if that isn't harsh enough punishment then I don't know what to do next, that's dangerous territory.

M2Ys4U(3935) 2 days ago [-]

'unintentionally'. Yeah, sure, whatever you say Zuck.

temporar(10000) 2 days ago [-]

There is so much doublespeak right now in corporate America, it's sickening.

moogly(10000) 1 day ago [-]

At least they seem to have a thesaurus at hand. Last month they 'mistakenly' did something.

https://nordic.businessinsider.com/facebook-old-posts-mark-z...

kerng(3393) 2 days ago [-]

Phones need better features to entirely prevent these things - so apps can't trick the user. I want no application to have access by default, something like an Incognito mode for all apps. The permission dialogs are typically not very helpful for making a meaningful decision, and apps don't function at all without certain permissions. So why not allow 'faking' contacts, storage, location, etc.?

The majority of apps are just spyware anyway.

low_key(10000) 2 days ago [-]

This could previously be done on custom Android builds with XPrivacy (an Xposed module).

It worked quite well for a long time, but tended to be quite a burden to maintain through OS updates. Starting with Oreo or so it no longer worked, but there was another similar module that had much of its functionality.

It could even go as far as exposing a subset of your address book to an app. So, for example, when I wanted to use WhatsApp I could just show it the 3 contacts that I wanted it to see.

The operating system should sandbox every app and by default provide it fake data for everything. The user should say what they really want to allow the app to access.

I eventually switched to an iPhone and just don't install many apps.

nacs(3788) 2 days ago [-]

iOS has a prompt before your address book/contacts are shared with any app and apps will always work without it (required by dev guidelines).

However, note that this article is not referring to the Facebook mobile app accessing the phone's contacts -- this is about their service logging into a person's email service (like Gmail) and downloading their email contacts.

oldjokes(10000) 2 days ago [-]

Are they just flat out teaching people how to be super deceptive and how to tactically play stupid in MBA programs nowadays?

rchaud(10000) 2 days ago [-]

You're really attributing this to the MBA boogeyman, when Zuck the l33t coder dropout almost certainly had to sign off on a move like this?

throwaway_9168(10000) 2 days ago [-]

Since FB has gone out of their way to weaponize 'friendship', my suggestion to everyone who actually likes to have some standards in their life and doesn't like to be manipulated like that is simple. Just do it back to them. 'Unfriend' (IRL) everyone you know who works at Facebook and tell them you will 'friend' them back once they leave the company.

malms(10000) 2 days ago [-]

Or maybe people could learn 'personal responsibility' again and realize that everything they give to Facebook is exactly like giving your life to any company like Coca-Cola, and that these companies can do pretty much anything with it within the limits of the 'laws that are actually enforced', whose number is pretty much 0.

Stop complaining, start taking responsibility.

Lammy(3995) 2 days ago [-]

Not everyone has the luxury to be able to discard their health insurance or work visa on a whim to suit your opinion of their employer.

thecatspaw(4115) 2 days ago [-]

I would never friend someone again who dropped me because of my employer

callinyouin(3797) 2 days ago [-]

Judging from other comments this is an unpopular idea, but why does business get to be some sort of quasi morality-free zone where nobody has to take responsibility for anything? If a friend works for a company that engages in activity that I find morally reprehensible, why shouldn't this affect our friendship? I think our society could really benefit from a little accountability, so in lieu of regulations and laws protecting us from corporations I think protecting our social circles from people who endorse the bad actions of their employers because 'it's just business' is perfectly okay.

I see other comments talking about personal responsibility, but in the case of FB the notion of a company selling their data is too abstract to clearly understand the risks/consequences for many. Should we put no responsibility on corporations to act civilly or at least legally? Should one not have a personal responsibility to engage only with corporate entities that behave civilly/lawfully/etc? I really don't understand this mindset.

nathan_long(3898) 2 days ago [-]

I don't recall ever hearing that Facebook made a mistake which decreased the amount of data they collected or their usage thereof. Can anyone provide an example?

hhanesand(10000) 2 days ago [-]

I get what you're getting at here, but I don't think it would be reported in the general media as it's not a privacy violation.

jefftk(10000) 2 days ago [-]

I'm sure Facebook has had bugs that broke various forms of data collection, or missed data they could have collected. We wouldn't hear about it, but it would be surprising if it hadn't happened.

peteretep(1558) 2 days ago [-]

Honestly I don't understand why Zuck doesn't sell up at Facebook and use his considerable money and brains to move to philanthropy, like billg. His personal brand is going to continue to dive while he's the face of this bullshit.

chimen(4067) 2 days ago [-]

Ego.

nathan_long(3898) 2 days ago [-]

Presumably he enjoys running a powerful business built on privacy violation more than he thinks he'd enjoy philanthropy.

The BS you refer to is his creation, not some accidental thing that happened to occur in his company without his intention.

hopler(10000) 2 days ago [-]

Zuckerberg, like billg, will have no interest in philanthropy until his mortality and his wife are staring him in the face, putting the fear of the afterlife in him.





Historical Discussions: How the Boeing 737 Max disaster looks to a software Developer (April 18, 2019: 501 points)

(506) How the Boeing 737 Max disaster looks to a software Developer

506 points 1 day ago by pross356 in 3669th position

spectrum.ieee.org | Estimated reading time – 31 minutes | comments | anchor

Photo: Jemal Countess/Getty Images
This is part of the wreckage of Ethiopian Airlines Flight ET302, a Boeing 737 Max airliner that crashed on 11 March in Bishoftu, Ethiopia, killing all 157 passengers and crew.

I have been a pilot for 30 years, a software developer for more than 40. I have written extensively about both aviation and software engineering. Now it's time for me to write about both together.

The Boeing 737 Max has been in the news because of two crashes, practically back to back and involving brand new airplanes. In an industry that relies more than anything on the appearance of total control, total safety, these two crashes pose as close to an existential risk as you can get. Though airliner passenger death rates have fallen over the decades, that achievement is no reason for complacency.

The 737 first appeared in 1967, when I was 3 years old. Back then it was a smallish aircraft with smallish engines and relatively simple systems. Airlines (especially Southwest) loved it because of its simplicity, reliability, and flexibility. Not to mention the fact that it could be flown by a two-person cockpit crew—as opposed to the three or four of previous airliners—which made it a significant cost saver. Over the years, market and technological forces pushed the 737 into ever-larger versions with increasing electronic and mechanical complexity. This is not, by any means, unique to the 737. Airliners constitute enormous capital investments both for the industries that make them and the customers who buy them, and they all go through a similar growth process.

Most of those market and technical forces are on the side of economics, not safety. They work as allies to relentlessly drive down what the industry calls "seat-mile costs"—the cost of flying a seat from one point to another.

Much had to do with the engines themselves. The principle of Carnot efficiency dictates that the larger and hotter you can make any heat engine, the more efficient it becomes. That's as true for jet engines as it is for chainsaw engines.

It's as simple as that. The most effective way to make an engine use less fuel per unit of power produced is to make it larger. That's why the Lycoming O-360 engine in my Cessna has pistons the size of dinner plates. That's why marine diesel engines stand three stories tall. And that's why Boeing wanted to put the huge CFM International LEAP engine in its latest version of the 737.
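
For reference, the Carnot bound the author is invoking: a heat engine working between a hot source at temperature T_hot and a cold sink at T_cold (both in kelvin) can never exceed

    efficiency = 1 - (T_cold / T_hot)

Raising T_hot lifts that ceiling, which is the "hotter" half of the claim; larger engines make it practical to run closer to it.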

There was just one little problem: The original 737 had (by today's standards) tiny little engines, which easily cleared the ground beneath the wings. As the 737 grew and was fitted with bigger engines, the clearance between the engines and the ground started to get a little...um, tight.

Illustration: Norebbo.com
By substituting a larger engine, Boeing changed the intrinsic aerodynamic nature of the 737 airliner.

Various hacks (as we would call them in the software industry) were developed. One of the most noticeable to the public was changing the shape of the engine intakes from circular to oval, the better to clear the ground.

With the 737 Max, the situation became critical. The engines on the original 737 had a fan diameter (that of the intake blades on the engine) of just 100 centimeters (40 inches); those planned for the 737 Max have 176 cm. That's a centerline difference of well over 30 cm (a foot), and you couldn't "ovalize" the intake enough to hang the new engines beneath the wing without scraping the ground.
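
The "well over 30 cm" follows directly from those fan diameters: the fan radius grows by (176 cm - 100 cm) / 2 = 38 cm, so an engine hung in the same position would sit 38 cm (about 15 inches) closer to the ground.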

The solution was to extend the engine up and well in front of the wing. However, doing so also meant that the centerline of the engine's thrust changed. Now, when the pilots applied power to the engine, the aircraft would have a significant propensity to "pitch up," or raise its nose.

The angle of attack is the angle between the wings and the airflow over the wings. Think of sticking your hand out of a car window on the highway. If your hand is level, you have a low angle of attack; if your hand is pitched up, you have a high angle of attack. When the angle of attack is great enough, the wing enters what's called an aerodynamic stall. You can feel the same thing with your hand out the window: As you rotate your hand, your arm wants to move up like a wing more and more until you stall your hand, at which point your arm wants to flop down on the car door.

This propensity to pitch up with power application thereby increased the risk that the airplane could stall when the pilots "punched it" (as my son likes to say). It's particularly likely to happen if the airplane is flying slowly.

Worse still, because the engine nacelles were so far in front of the wing and so large, a power increase will cause them to actually produce lift, particularly at high angles of attack. So the nacelles make a bad problem worse.

I'll say it again: In the 737 Max, the engine nacelles themselves can, at high angles of attack, work as a wing and produce lift. And the lift they produce is well ahead of the wing's center of lift, meaning the nacelles will cause the 737 Max at a high angle of attack to go to a higher angle of attack. This is aerodynamic malpractice of the worst kind.

Pitch changes with power changes are common in aircraft. Even my little Cessna pitches up a bit when power is applied. Pilots train for this problem and are used to it. Nevertheless, there are limits to what safety regulators will allow and to what pilots will put up with.

Pitch changes with increasing angle of attack, however, are quite another thing. An airplane approaching an aerodynamic stall cannot, under any circumstances, have a tendency to go further into the stall. This is called "dynamic instability," and the only airplanes that exhibit that characteristic—fighter jets—are also fitted with ejection seats.

Everyone in the aviation community wants an airplane that flies as simply and as naturally as possible. That means that conditions should not change markedly, there should be no significant roll, no significant pitch change, no nothing when the pilot is adding power, lowering the flaps, or extending the landing gear.

The airframe, the hardware, should get it right the first time and not need a lot of added bells and whistles to fly predictably. This has been an aviation canon from the day the Wright brothers first flew at Kitty Hawk.

Apparently the 737 Max pitched up a bit too much for comfort on power application as well as at already-high angles of attack. It violated that most ancient of aviation canons and probably violated the certification criteria of the U.S. Federal Aviation Administration. But instead of going back to the drawing board and getting the airframe hardware right (more on that below), Boeing relied on something called the "Maneuvering Characteristics Augmentation System," or MCAS.

Boeing's solution to its hardware problem was software.

I will leave a discussion of the corporatization of the aviation lexicon for another article, but let's just say another term might be the "Cheap way to prevent a stall when the pilots punch it," or CWTPASWTPPI, system. Hmm. Perhaps MCAS is better, after all.

MCAS is certainly much less expensive than extensively modifying the airframe to accommodate the larger engines. Such an airframe modification would have meant things like longer landing gear (which might not then fit in the fuselage when retracted), more wing dihedral (upward bend), and so forth. All of those hardware changes would be horribly expensive.

"Everything about the design and manufacture of the Max was done to preserve the myth that 'it's just a 737.' Recertifying it as a new aircraft would have taken years and millions of dollars. In fact, the pilot licensed to fly the 737 in 1967 is still licensed to fly all subsequent versions of the 737." —Feedback on an earlier draft of this article from a 737 pilot for a major airline

What's worse, those changes could be extensive enough to require not only that the FAA recertify the 737 but that Boeing build an entirely new aircraft. Now we're talking real money, both for the manufacturer as well as the manufacturer's customers.

That's because the major selling point of the 737 Max is that it is just a 737, and any pilot who has flown other 737s can fly a 737 Max without expensive training, without recertification, without another type of rating. Airlines—Southwest is a prominent example—tend to go for one "standard" airplane. They want to have one airplane that all their pilots can fly because that makes both pilots and airplanes fungible, maximizing flexibility and minimizing costs.

It all comes down to money, and in this case, MCAS was the way for both Boeing and its customers to keep the money flowing in the right direction. The necessity to insist that the 737 Max was no different in flying characteristics, no different in systems, from any other 737 was the key to the 737 Max's fleet fungibility. That's probably also the reason why the documentation about the MCAS system was kept on the down-low.

Put in a change with too much visibility, particularly a change to the aircraft's operating handbook or to pilot training, and someone—probably a pilot—would have piped up and said, "Hey. This doesn't look like a 737 anymore." And then the money would flow the wrong way.

As I explained, you can do your own angle-of-attack experiments just by putting your hand out a car door window and rotating it. It turns out that sophisticated aircraft have what is essentially the mechanical equivalent of a hand out the window: the angle-of-attack sensor.

You may have noticed this sensor when boarding a plane. There are usually two of them, one on either side of the plane, and usually just below the pilot's windows. Don't confuse them with the pitot tubes (we'll get to those later). The angle-of-attack sensors look like wind vanes, whereas the pitot tubes look like, well, tubes.

Angle-of-attack sensors look like wind vanes because that's exactly what they are. They are mechanical hands designed to rotate in response to changes in that angle of attack.

The pitot tubes measure how much the air is "pressing" against the airplane, whereas the angle-of-attack sensors measure what direction that air is coming from. Because they measure air pressure, the pitot tubes are used to determine the aircraft's speed through the air. The angle-of-attack sensors measure the aircraft's direction relative to that air.
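To make the pitot tube's role concrete: airspeed can be recovered from the dynamic pressure the tube senses, via the standard relation q = 1/2 rho v^2. Here is a minimal Python sketch, assuming sea-level air density; real air-data computers apply calibration and compressibility corrections on top of this, and the names here are illustrative.

    import math

    RHO_SEA_LEVEL = 1.225  # air density, kg/m^3, standard atmosphere at sea level

    def airspeed_from_pitot(dynamic_pressure_pa: float) -> float:
        """Airspeed in m/s from pitot dynamic pressure, using q = 0.5 * rho * v^2."""
        return math.sqrt(2.0 * dynamic_pressure_pa / RHO_SEA_LEVEL)

    # Example: 1225 Pa of dynamic pressure works out to about 44.7 m/s (~87 knots).
    print(airspeed_from_pitot(1225.0))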

There are two sets of angle-of-attack sensors and two sets of pitot tubes, one set on either side of the fuselage. Normal usage is to have the set on the pilot's side feed the instruments on the pilot's side and the set on the copilot's side feed the instruments on the copilot's side. That gives a state of natural redundancy in instrumentation that can be easily cross-checked by either pilot. If the copilot thinks his airspeed indicator is acting up, he can look over to the pilot's airspeed indicator and see if it agrees. If not, both pilot and copilot engage in a bit of triage to determine which instrument is profane and which is sacred.
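That triage amounts to a simple agreement test between the two sides. A minimal sketch of the idea in Python; the tolerance and names are illustrative assumptions, not certified values.

    DISAGREE_TOLERANCE_KTS = 5.0  # illustrative tolerance, not a certified figure

    def airspeed_indicators_agree(pilot_side_kts: float,
                                  copilot_side_kts: float) -> bool:
        """True when the pilot's and copilot's airspeed indicators roughly agree."""
        return abs(pilot_side_kts - copilot_side_kts) <= DISAGREE_TOLERANCE_KTS

    # When this returns False, neither reading is trusted automatically; the
    # crew compares against other instruments to decide which side has failed.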

Long ago there was a joke that in the future planes would fly themselves, and the only thing in the cockpit would be a pilot and a dog. The pilot's job was to make the passengers comfortable that someone was up front. The dog's job was to bite the pilot if he tried to touch anything.

On the 737, Boeing not only included the requisite redundancy in instrumentation and sensors, it also included redundant flight computers—one on the pilot's side, the other on the copilot's side. The flight computers do a lot of things, but their main job is to fly the plane when commanded to do so and to make sure the human pilots don't do anything wrong when they're flying it. The latter is called "envelope protection."

Let's just call it what it is: the bitey dog.

Let's review what the MCAS does: It pushes the nose of the plane down when the system thinks the plane might exceed its angle-of-attack limits; it does so to avoid an aerodynamic stall. Boeing put MCAS into the 737 Max because the larger engines and their placement make a stall more likely in a 737 Max than in previous 737 models.

When MCAS senses that the angle of attack is too high, it commands the aircraft's trim system (the system that makes the plane go up or down) to lower the nose. It also does something else: It pushes the pilot's control columns (the things the pilots pull or push on to raise or lower the aircraft's nose) downward.
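Reduced to a sketch, the behavior just described fits in a few lines of Python. The threshold and names here are hypothetical; Boeing's actual activation logic and values are more involved.

    AOA_LIMIT_DEG = 14.0  # hypothetical activation threshold, not Boeing's figure

    def mcas_step(aoa_deg: float) -> str:
        """One MCAS cycle: decide whether to command nose-down stabilizer trim."""
        if aoa_deg > AOA_LIMIT_DEG:
            # Trim the stabilizer nose-down; as described above, force is also
            # fed into the pilots' control columns in the same direction.
            return "TRIM_NOSE_DOWN"
        return "NO_ACTION"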

In the 737 Max, like most modern airliners and most modern cars, everything is monitored by computer, if not directly controlled by computer. In many cases, there are no actual mechanical connections (cables, push tubes, hydraulic lines) between the pilot's controls and the things on the wings, rudder, and so forth that actually make the plane move. And, even where there are mechanical connections, it's up to the computer to determine if the pilots are engaged in good decision making (that's the bitey dog again).

But it's also important that the pilots get physical feedback about what is going on. In the old days, when cables connected the pilot's controls to the flying surfaces, you had to pull up, hard, if the airplane was trimmed to descend. You had to push, hard, if the airplane was trimmed to ascend. With computer oversight there is a loss of natural sense in the controls. In the 737 Max, there is no real "natural feel."

True, the 737 does employ redundant hydraulic systems, and those systems do link the pilot's movement of the controls to the action of the ailerons and other parts of the airplane. But those hydraulic systems are powerful, and they do not give the pilot direct feedback from the aerodynamic forces that are acting on the ailerons. There is only an artificial feel, a feeling that the computer wants the pilots to feel. And sometimes, it doesn't feel so great.


When the flight computer trims the airplane to descend, because the MCAS system thinks it's about to stall, a set of motors and jacks push the pilot's control columns forward. It turns out that the flight management computer can put a lot of force into that column—indeed, so much force that a human pilot can quickly become exhausted trying to pull the column back, trying to tell the computer that this really, really should not be happening.

Illustration: Norebbo.com
The antistall system depended crucially on sensors that are installed on each side of the airliner—but the system consulted only the sensor on one side.

Indeed, not letting the pilot regain control by pulling back on the column was an explicit design decision. Because if the pilots could pull up the nose when MCAS said it should go down, why have MCAS at all?

MCAS is implemented in the flight management computer, even at times when the autopilot is turned off, when the pilots think they are flying the plane. In a fight between the flight management computer and human pilots over who is in charge, the computer will bite humans until they give up and (literally) die.

Finally, there's the need to keep the very existence of the MCAS system on the hush-hush lest someone say, "Hey, this isn't your father's 737," and bank accounts start to suffer.

The flight management computer is a computer. What that means is that it's not full of aluminum bits, cables, fuel lines, or all the other accoutrements of aviation. It's full of lines of code. And that's where things get dangerous.

Those lines of code were no doubt created by people at the direction of managers. Neither the coders nor their managers are as in touch with the particular culture and mores of the aviation world as the people who are down on the factory floor, riveting wings on, designing control yokes, and fitting landing gear. Those people have decades of institutional memory about what has worked in the past and what has not. Software people do not.

In the 737 Max, only one of the flight management computers is active at a time—either the pilot's computer or the copilot's computer. And the active computer takes inputs only from the sensors on its own side of the aircraft.

When the two computers disagree, the solution for the humans in the cockpit is to look across the control panel to see what the other instruments are saying and then sort it out. In the Boeing system, the flight management computer does not "look across" at the other instruments. It believes only the instruments on its side. It doesn't go old-school. It's modern. It's software.

This means that if a particular angle-of-attack sensor goes haywire—which happens all the time in a machine that alternates from one extreme environment to another, vibrating and shaking all the way—the flight management computer just believes it.
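In software terms, that is a single point of trust. A sketch, with made-up names, of the design the article describes:

    def mcas_aoa_input(own_side_vane_deg: float) -> float:
        """Return the one and only AoA value the active computer uses.

        Nothing here consults the opposite-side vane or any other instrument,
        so a stuck sensor reporting, say, 40 degrees is accepted as fact.
        """
        return own_side_vane_deg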

It gets even worse. There are several other instruments that can be used to determine things like angle of attack, either directly or indirectly, such as the pitot tubes, the artificial horizons, etc. All of these things would be cross-checked by a human pilot to quickly diagnose a faulty angle-of-attack sensor.

In a pinch, a human pilot could just look out the windshield to confirm visually and directly that, no, the aircraft is not pitched up dangerously. That's the ultimate check and should go directly to the pilot's ultimate sovereignty. Unfortunately, the current implementation of MCAS denies that sovereignty. It denies the pilots the ability to respond to what's before their own eyes.
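The cross-check a human pilot performs can be sketched as a consistency vote across independent sources. Everything here (the names, the choice of sources, the tolerance) is an illustrative assumption, not a description of any certified system.

    def aoa_plausible(left_vane_deg: float,
                      right_vane_deg: float,
                      pitch_from_horizon_deg: float,
                      tolerance_deg: float = 5.0) -> bool:
        """Do independent estimates of the aircraft's attitude roughly agree?

        Pitch is not the same thing as angle of attack, but in level flight it
        is close enough to expose a vane stuck at an absurd value. A real
        air-data system would be far more careful; this only shows the idea of
        refusing to act on one sensor that every other source contradicts.
        """
        readings = [left_vane_deg, right_vane_deg, pitch_from_horizon_deg]
        return max(readings) - min(readings) <= tolerance_deg

    # A vane stuck at 40 degrees while the other vane and the artificial horizon
    # read about 5 degrees fails the check; a well-designed system would suppress
    # the automated nose-down response and alert the crew instead.
    print(aoa_plausible(40.0, 5.0, 4.0))  # False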

Like someone with narcissistic personality disorder, MCAS gaslights the pilots. And it turns out badly for everyone. "Raise the nose, HAL." "I'm sorry, Dave, I'm afraid I can't do that."

In the MCAS system, the flight management computer is blind to any other evidence that it is wrong, including what the pilot sees with his own eyes and what he does when he desperately tries to pull back on the robotic control columns that are biting him, and his passengers, to death.

In the old days, the FAA had armies of aviation engineers in its employ. Those FAA employees worked side by side with the airplane manufacturers to determine that an airplane was safe and could be certified as airworthy.

As airplanes became more complex and the gulf between what the FAA could pay and what an aircraft manufacturer could pay grew larger, more and more of those engineers migrated from the public to the private sector. Soon the FAA had no in-house ability to determine if a particular airplane's design and manufacture were safe. So the FAA said to the airplane manufacturers, "Why don't you just have your people tell us if your designs are safe?"

The airplane manufacturers said, "Sounds good to us." The FAA said, "And say hi to Joe, we miss him."

Thus was born the concept of the "Designated Engineering Representative," or DER. DERs are people in the employ of the airplane manufacturers, the engine manufacturers, and the software developers who certify to the FAA that it's all good.

Now this is not quite as sinister a conflict of interest as it sounds. It is in nobody's interest that airplanes crash. The industry absolutely relies on public trust, and every crash is an existential threat to the industry. No manufacturer is going to employ DERs who just pencil-whip the paperwork. On the other hand, after a long day and after the assurances of some software folks, a DER might just take their word that things will be okay.

It is astounding that no one who wrote the MCAS software for the 737 Max seems even to have raised the possibility of using multiple inputs, including the opposite angle-of-attack sensor, in the computer's determination of an impending stall. As a lifetime member of the software development fraternity, I don't know what toxic combination of inexperience, hubris, or lack of cultural understanding led to this mistake.

But I do know that it's indicative of a much deeper problem. The people who wrote the code for the original MCAS system were obviously terribly far out of their league and did not know it. How can they implement a software fix, much less give us any comfort that the rest of the flight management software is reliable?

So Boeing produced a dynamically unstable airframe, the 737 Max. That is big strike No. 1. Boeing then tried to mask the 737's dynamic instability with a software system. Big strike No. 2. Finally, the software relied on systems known for their propensity to fail (angle-of-attack indicators) and did not appear to include even rudimentary provisions to cross-check the outputs of the angle-of-attack sensor against other sensors, or even the other angle-of-attack sensor. Big strike No. 3.

None of the above should have passed muster. None of the above should have passed the "OK" pencil of the most junior engineering staff, much less a DER.

That's not a big strike. That's a political, social, economic, and technical sin.

It just so happens that, during the timeframe between the first 737 Max crash and the most recent 737 crash, I'd had the occasion to upgrade and install a brand-new digital autopilot in my own aircraft. I own a 1979 Cess