Hacker News with comments/articles inlined for offline reading

Authors ranked on leaderboard
Last updated: May 20, 2019 08:12
Reload to view new stories

Front Page/ShowHN stories over 4 points from the last 7 days
If internet connection drops, you can still read the stories
If there were any historical discussions of the story, links to all the previous submissions on Hacker News appear just above the comments.

Historical Discussions: I Charged $18k for a Static HTML Page (May 15, 2019: 1130 points)

(1131) I Charged $18k for a Static HTML Page

1131 points 5 days ago by firefoxd in 2176th position

idiallo.com | Estimated reading time – 9 minutes

Not too long ago, I made a living as a contractor, hopping from project to project. Some were short-term, where I would work for a week and quickly deliver my service. Others lasted a couple of months, long enough that I could make the money to take some time off. I preferred the short ones because they allowed me to charge a much higher rate for a quick job. Not only did I feel like my own boss, but I also felt like I didn't have to work too hard to make a decent living. My highest rates were still reasonable, and I always delivered high-quality service. That was until I landed a gig with a large company.

This company contacted me urgently; the manager told me they needed someone right away, someone who required minimum training for maximum performance. For better or worse, that was my motto. This project was exactly the type of work I liked. It was short, fast, and it paid well.

After negotiating a decent rate, I received an email with the instructions. They gave me more context for the urgency. Their developer left without prior warning and never updated anyone on the status of his project.

We need your full undivided attention to complete this project. For the duration of the contract, you will work exclusively with us to deliver results in a timely manner. We plan to compensate you for the trouble.

The instructions were simple: read the requirements, then come up with an estimate of how long it would take to complete the project. This was one of the easier projects I had encountered in my career. It was an HTML page with some minor animations and a few embedded videos. I spent the evening studying the requirements and simulating the implementation in my head. Over the years, I've learned not to write any code for a client until I have a guarantee of pay.

I determined that this project would be a day's worth of work. But to be cautious, I quoted 20 hours with a rough total of $1,500. It was a single HTML page, after all, and I could only charge them so much. They asked me to come on-site to their satellite office 25 miles away. I would have to drive there for the 3 days I would be working for them.

The next day, I arrived at the satellite office. It was in a shopping center, where a secret door led to a secret world in which a few workers were churning away quietly in their cubicles. The receptionist presented me with a brand new MacBook Pro that I had to set up from scratch. I actually prefer using a company's laptop: they often require contractors to install suspicious software, and better on their machine than on mine.

I spent the day downloading my toolkit, setting up email and ssh keys, and requesting invites to services. In other words, I got nothing done. This is why I quoted 20 hours: I lost 8 of my estimated hours doing busy work.

The next day, I was ready to get down to business. Armed with the MacBook Pro, I sent an email to the manager. I told him that I was ready to work and that I was waiting for the aforementioned assets. That day, I stayed in my cubicle under a softly buzzing light, twiddling my fingers until the sun went down.

I did the math again. According to my estimate, I had 4 hours left to do the job, which was not so unrealistic for a single HTML page. But needless to say, the next day, I spent those remaining 4 hours in a company sponsored lunch where I ate very well and mingled with other employees.

When the time expired, I made sure to send the manager another email to let him know that I had been present at the company, only I had not received the assets I needed to do the job. That email, of course, was ignored.

The following Monday, I hesitantly drove the 25 miles. To my surprise, the manager had come down to the satellite office, where he enthusiastically greeted me. He was a nice, easy-going guy in his mid-thirties. I was confused; he didn't have the urgent tone he'd had on the phone when he hired me. We had a friendly conversation in which no work was mentioned. Later, we went down to lunch, where he paid for my meal. It was a good day. No work was done.

Call me a creature of habit, but if you feed me and pamper me every day, I get used to it. It turned into a routine. I'd come to work and spend some time online reading and watching videos. I'd send one email a day so they knew I was around. Then I'd go get lunch and hang out with whoever had an interesting story to share. At the end of the day, I'd stand up, stretch, let out a well-deserved yawn, then drive home.

I got used to it. In fact, I was expecting it. It was a little disappointing when I finally got an email with a link to the assets I needed for the job. I came back down to earth and put on my working face. Only, after spending a few minutes looking through the zip file, I noticed that it was missing the bulk of what I needed. The designer had sent me some Adobe Illustrator files, and I couldn't open them on the MacBook.

I replied to the email explaining my concerns and bundled in a few other questions to save time. At that point, my quoted 20 hours had long expired, and I wanted to get this job over with already. Shortly after I clicked send, I received an email. All it said was 'Adding Alex to the thread,' and Alex was CC'd. Then Alex replied, adding Steve to the thread. Steve replied saying that Michelle was a designer and she would know more about this. Michelle auto-responded saying that she was on vacation and that all inquiries should be directed to her manager. Her manager replied asking, 'Who is Ibrahim?' My manager replied, excusing himself for not introducing me.

As a contractor, I am usually in and out of a company before people notice that I work there. Here, I received a flood of emails welcoming me aboard. The chain of emails continued for a while, and I was forced to answer those awfully nice messages. Some people were eager to meet me in person. They got a little disappointed when I said that I was all the way down in California. And jealous; they said they were jealous of the beautiful weather.

They used courtesy to ignore my emails. They used CC to deflect my questions. They used spam to dismiss anything I asked. I spent my days like an archaeologist, digging through the deep trenches of email, hoping to find answers to my questions. You can imagine the level of impostor syndrome I felt every time I remembered that my only task was to build a single static HTML page. The overestimated 20-hour project turned into a 7-week adventure in which I enjoyed free lunches, drove 50 miles every day, and dug through emails.

When I finally completed the project, I sent it to the team on GitHub. All great adventures must come to an end. But shortly after, I received an invitation to have my code reviewed by the whole team on Google Hangouts. I had spent more than a month building a single static HTML page, and now the entire team would critique my work? In my defense, there were also some JavaScript interactions, and it was responsive, and it had CSS animations... Impostor.

Of course, the video meeting was rescheduled a few times. When it finally happened, my work and I were not the subject of the meeting. They were all sitting in the same room somewhere in New York, talking for a while like a tight-knit group. In fact, all they ever said about the project was:

Person 1: Hey, is anyone working on that sponsored page?

Person 2: Yeah, I think it's done.

Person 1: Great, I'll merge it tonight.

When I went home that night, I realized that I was facing another challenge. I had been working at this company for 7 weeks, and my original quote was for $1,500. That's roughly the equivalent of $11,100 a year or $214 a week. Or even better, it was $5.35 an hour.

This barely covered my transportation. So, I sent them an invoice where I quoted them for 7 weeks of work at the original hourly rate. The total amounted to $18,000. I was ashamed of course, but what else was I supposed to do?

Just as I expected, I got no reply. If there is one thing all large companies have in common, it's that they are not very eager to pay their bills on time. I felt like a cheat charging so much for such a simple job, but this was not a charity. I had been driving 50 miles every day to do the job; if the job was not getting done, it was not for my lack of trying. It was their slow responses.

I got an answer the following week. It was a cold email from the manager in which he broke down every day I worked into hourly blocks. Then he highlighted the blocks I had worked and marked a one-hour lunch break for each day. At the end, he did some calculations with our agreed-upon hourly rate.

Apparently, I was wrong. I had miscalculated the total. After adjustment, the total amount they owed me was $21,000.

Please confirm the readjusted hours so accounting can write you a check.

I quickly confirmed these hours.
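The numbers in the story hold up to a quick back-of-the-envelope check. A rough sketch (assuming the $75/hour rate implied by the $1,500 / 20-hour quote, and 8-hour days, 5 days a week, neither of which the article states outright):

```python
# Figures taken from the story
quote_total = 1500        # original fixed quote, USD
quote_hours = 20          # original estimate, hours
weeks_worked = 7          # actual duration of the engagement

hourly_rate = quote_total / quote_hours      # implied rate: $75/hour
hours_on_site = weeks_worked * 5 * 8         # 280 hours, assuming full 8-hour days

# What the original $1,500 works out to, spread over 7 weeks on site
weekly = quote_total / weeks_worked             # ~$214/week, as in the article
effective_hourly = quote_total / hours_on_site  # ~$5.36/hour (article rounds to $5.35)

# What 7 weeks at the agreed rate comes to: the manager's $21,000,
# $3,000 more than the author's own $18,000 invoice
full_bill = hourly_rate * hours_on_site

print(hourly_rate, round(weekly), round(effective_hourly, 2), full_bill)
```

At that implied $75/hour, the author's $18,000 invoice corresponds to 240 billed hours, which suggests the manager's recount simply credited full 8-hour days to arrive at $21,000.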

All Comments: [-]

adamqureshi(3302) 5 days ago [-]

We did some agency work (ad agency / subcontract) in NYC. I said to the agency we would deliver HTML/CSS/JS and it's $12,500 for the site/pages. They came back with: no, our budget is $25k and we will pay that. I'm like, OK, wire me the bread, here is my ACH (they did). I delivered the HTML/CSS/JS. They came back with: but who is gonna do the API work with our back-end guy? Seems they could not make the distinction between the front end and connecting it to the back end. I can only do HTML/CSS, I can't do the API stuff, and I told them that in writing very clearly. They had a back-end guy we delivered the work to. I got played too many times NOT to get my money upfront and send a bill when I use up the time. They said sorry for the confusion, OK so how much for the API, and we will cut you a check from accounting. Accounting has no clue what marketing is buying.

LocalPCGuy(4126) 5 days ago [-]

In my experience, where the front end stops and the back end starts is always a fuzzy line for non-technical clients (even if they have devs on staff). Especially when you're talking to the marketing department and the devs work in IT or another similarly separated department. It pays to have the discussion and get it in writing up front (plus, if you CAN do API work, it may provide additional revenue that you didn't necessarily know was available).

grecy(2103) 5 days ago [-]

I worked for a large company that wanted their public website redone to be 'mobile friendly' (in 2015).

It's Drupal, with a custom theme. Everything that was actually 'tricky' was de-scoped, so it wound up being something my university buddies and I would have charged $5k for back in the day, and probably $30k these days.

That project cost well over $1mil. For a Drupal site with a custom 'Mobile friendly' theme.

JakeTheAndroid(10000) 5 days ago [-]

When I worked for a non-profit they paid like a quarter million dollars for a Drupal site with some slight modifications that never even got completed. I had to trudge through months of debugging and solving problems, and ultimately I couldn't be bothered with Drupal Core so we brought in some contractors to solve the biggest issues while I worked on rebuilding the whole stack. Drupal knowledge seems to pay well, they were able to charge us $150 an hour at their non-profit discount.

ErotemeObelus(4062) 5 days ago [-]


sctb(2638) 5 days ago [-]

Could you please post thoughtfully and informatively? This is a conversation website, not a drive-by snark site.


rhacker(4119) 5 days ago [-]

Sometimes I do kinda wish to get back into consulting because of situations like these. In almost all cases the work is never used, but instead of bailing the company out, it's really just bailing some situation out. It's all internal politics.

Back when I was in my 20s during my walks from the light rail to the office I would have constant thoughts about how idiotic programming and office work is. The ONLY reason it all exists is because people can't trust each other. And that maybe trust isn't really necessary in a society that lessens personal ownership and has more of a share-all-but-do-your-part approach.

It's literally so unfair the way we've segmented people into knowledge zones. Bankers know how easy it is to double money without taking risk. Programmers know how to pretend something will take 3 weeks when it will only take 2 days.

Recently I've learned how easy it is to set up solar. I cry every time I hear someone get trapped in some solar contract when they sell their house. It's literally mind warping how fucked up every aspect of this economy is.

cmcginty(4118) 5 days ago [-]

Taking a left turn here, but how is it 'easy' to set up solar? At a minimum you'll need a master electrician to sign off on your install, and one might be hard to find after the fact.

IkmoIkmo(10000) 5 days ago [-]

I've had similar experiences working for corporate clients, but it was legal work, not tech stuff. It was a bit more complex than the equivalent of a static HTML page, but something that anyone with an IQ > 90 could learn in less than half a year.

There were days when I'd charge clients $15k... for a day's work. This wouldn't have been possible if I worked on-site. But I was essentially completing $15k of contracted work in a single day, which was sold as a fixed-fee in return for a legal report. The type of work that should cost maybe $200 in total.

Corporations get kind of crazy, there's extreme focus on some areas (mainly, those with KPIs and KPI owners attached), and extreme nonchalance on others. They're so big that there's just lots of insane things like this that slip through.

bduerst(10000) 5 days ago [-]

These types of gigs are more about who you know rather than the work that gets done. I'm guessing that you didn't just cold interview for this work, right?

pault(4063) 5 days ago [-]

> $15k of contracted work in a single day, which was sold as a fixed-fee in return for a legal report. The type of work that should cost maybe $200 in total.

This sounds like a SaaS waiting to happen.

TomVDB(10000) 5 days ago [-]

There was a time during the golden dotcom era when my manager scheduled a monthly management meeting across the Atlantic, which required me and 2 colleagues to fly all the way over for a meeting that lasted about 4 hours.

Business class plane tickets, 2 nights in a nice hotel, rental car, dinner at very fancy restaurant.

Meanwhile, that same 100k+ employee company wasn't able to set up email fast enough for new employees, so some new hires had to use hotmail(!) for weeks before they were in the system.

mettamage(3738) 5 days ago [-]

I'd like to ask a couple of questions about this. My email is in my profile if you'd like to answer them.

9nGQluzmnq3M(10000) 4 days ago [-]

Randomly enough, the writer of this previously achieved HN fame when he was unstoppably fired by a machine:


qnsi(4127) 5 days ago [-]

Amazing. It kind of sounds unbelievable, but I don't have that much experience working with big corporations.

This should be read by everyone wondering how a few people at a startup can beat big corporations.

thrower123(3377) 5 days ago [-]

$18000 is the kind of petty cash that a lot of departments have lying around in their budgets at the end of a quarter. Especially if the budgeting process is of the use-it-or-lose-it variety, there can be a push to buy things that are unnecessary at the end of quarters or fiscal years.

I've been involved in more than one project where a company bought a largish subscription license for a product, and never got anybody lined up to actually deploy it before the licensing ran out - I assume whoever was in charge of that initiative got laid off or took a new job and it fell through the cracks.

peshooo(10000) 5 days ago [-]

I don't know how it is in big corporations in America, but working as a consultant in Western Europe, I have more than once been in a situation where I'm starting on a project and have to wait 3-4 months for things like accounts, access/permissions, and a laptop. It's pretty pathetic at the dailies to report, every day, that I'm still waiting for these things.

noir_lord(4039) 5 days ago [-]

Big corporations are batshit.

One division will be counting paperclips and another dropping $50k on a machine no one needs to do their job.

It entirely comes down to management in each place.

joshvm(4128) 5 days ago [-]

It sounds believable to me. I've had some interactions with big companies and you have to readjust what you think is a reasonable price for something.

I work with hardware and you'd be amazed how much people will spend without batting an eyelid (eg tens of thousands on a single instrument). In some cases they'll even remark at how inexpensive what you've offered is.

For example, the thermal cameras I work with are now consumer available for $5k. A decade ago you'd easily spend an order of magnitude more for the same sort of performance. And companies would happily fork out for it.

Bear in mind this company had just lost a developer. The overheads for that member of staff alone, for two months, probably exceed $20k in the US.

docker_up(3277) 5 days ago [-]

During the early days of 'intranets', I worked at a very large company, akin to a government organization. I thought we could save paper by putting our reports on the intranet instead of printing them out. There was one particularly important report that needed to be printed every week, and it seemed like a good candidate.

I called the person who needed the report, but they said they didn't need it and passed me to the person who had requested it from them. I called the next person, and they passed me on to another person. I followed this about 5 people deep until I found one person who told me they didn't need that report at all.

I left that company within 2 months of joining.

piptastic(10000) 5 days ago [-]

The author basically just had a job for a while. They aren't charging 18K for a web page, they're charging 18K to commute/be there for 7 weeks.

A lot of big companies aren't paying for output, they're paying for butts in seats. Why they do this has been discussed somewhat already in these comments.

biztos(4006) 4 days ago [-]

> they're paying for butts in seats

There's also the case, which IMO is the standard case, that the company is trying to pay for output but it's a long and winding road from the butt in the seat to the output to the sale.

Within that, if you're say a dev manager, and it's really hard to get head count allocated, then it can be totally rational to keep an idle butt in a paid-for seat so you don't lose the seat.

Even if Mr. Idlebutt is terrible at his job, you have at least some possibility of replacing him later on, when you have a need for some work to be done and probably wouldn't easily get a new seat allocated for it.

Not only is it nearly impossible to establish a direct relationship between any particular butt-in-seat and your budget, in principle it's probably a good thing to operate with a little excess capacity.

Until the dream of the Fully Fungible Knowledge Worker is achieved, which it won't be, this is a lot more rational than the implied waste would lead you to believe. Of course this doesn't factor in morale impact...

titanomachy(10000) 4 days ago [-]

Yeah this doesn't seem like such a great deal when you consider that plenty of companies will pay that much, plus full benefits, for a full-time employee. And two months with only one small deliverable isn't that strange at a large company.

testplzignore(10000) 5 days ago [-]

Anyone else think $21k for this much time is too low? That's $156k a year assuming full-time hours which a contractor probably isn't going to get. Plus the guy lives in California. Plus health insurance.

What's a typical rate for this length of contracting work in California?

Hamuko(10000) 5 days ago [-]

Sounds like decent pay if he spent as much time just sitting on his ass browsing the Internet as it sounded like he did.

greggyb(4094) 5 days ago [-]

Keep in mind the work being done. He emphasizes its simplicity. This is not a senior role requiring a huge amount of expertise.

Of course, he's probably charging too little in general. Everyone is.

tigroferoce(4054) 5 days ago [-]

At this point it is mandatory to remind us all of the story of the forgotten employee. https://sites.google.com/site/forgottenemployee/

csunbird(10000) 4 days ago [-]

Is this real?

zitterbewegung(319) 5 days ago [-]

I charged $500 for a single static HTML page in the early 2010s. I figured out free hosting using Google App Engine. Also, I was on a Skype call with the client the whole time, and it had to be done by the end of the day. I don't think the author did anything wrong.

Insanity(3647) 5 days ago [-]

The money is good, but the working conditions seem bad :D I'd hate having to work with someone constantly 'peering over my shoulder', even if just virtually.

ryanbrunner(4021) 5 days ago [-]

Entertaining story - although as advice to anyone reading it, running up the clock without proactively notifying people that you're going way beyond your original estimate is a very good way to make getting paid incredibly difficult.

atoav(10000) 4 days ago [-]

Precisely the opposite. People are usually willing to pay far more if they are to blame for you being slow. If they get even the slightest hint of a feeling that you might be kicking the can down the road, they will blame it entirely on you.

This is why you leave a paper/email trail.

everdev(3006) 4 days ago [-]

Just to clarify, he got paid $21k for 7 weeks of time. He just so happened to only deliver a static HTML page, but was on prem as requested during the engagement.

Still, it's a quick way to get a reputation for running up the clock.

tomcam(604) 5 days ago [-]

According to the article he emailed them daily.

jkingsbery(4127) 5 days ago [-]

Alternatively (or in addition): sending invoices periodically.

brundolf(2854) 5 days ago [-]

It was my impression that that's what he did. Of course the whole situation still could've made it difficult to get paid, but I don't think there's anything else he could've done.

im_new_here(10000) 5 days ago [-]

I don't disagree, but it's worth noting that at the end of it all, he billed them for $18k and they decided it was too little.

sxp62000(10000) 5 days ago [-]

Ha! Imagine how much a Deloitte-like company would've charged them for making that html page.

moron4hire(3323) 5 days ago [-]

10x just to come up with the PowerPoints saying they could do it for another 50x. Then still fuck it up.

atemerev(3243) 5 days ago [-]

Some static HTML pages (e.g. ICO landings when it was still a thing) could well be worth $18k and more.

readbeard(10000) 5 days ago [-]

Indeed! Consider a simple brochure-style marketing page, which is often a good use case for static HTML. Such a page should be centered around content, and that content (photography, illustration, copy, animation, etc.) may need to be specially built for the page and may be expensive to produce. Everything has to fit together flawlessly, telling a compelling story while staying on-brand. Performance needs to be great. And if the page is generating much revenue, it is easy to justify spending even more money in optimizations if they are likely to improve conversion rates.

I realize this has nothing to do with the ridiculous situation described in the article, but I do think it's worth pointing out that $18k is not at all an inherently ridiculous amount of money to charge for a static HTML page. In some cases, it may not be nearly enough.

jnaddef(10000) 5 days ago [-]

Spoilers: he did not actually get paid $18,000

dymk(10000) 5 days ago [-]

... he got paid $21,000

tamersalama(2745) 5 days ago [-]

Tangentially: I liked listening to the article, with the soothing background music, in the author's voice.

geophertz(10000) 4 days ago [-]

If only we could change the speed.

cosmodisk(10000) 5 days ago [-]

I used to live with this guy when we were studying. He got himself a job with the largest DIY retailer in the country. It was an office job with a relatively good salary. Eventually I moved out and only saw him again after half a year or so. I asked how the job was, to which he replied that he had quit after 3 months. I asked why. He said he used to come to the office every day and ask around if he could help with anything. Everyone was nice but kept saying no help was required. He got bored after 3 months of doing nothing and quit. He now runs his own business...

notahacker(10000) 5 days ago [-]

The early winners of the UK Apprentice were awarded £100k jobs as a prize, which didn't actually entail doing anything, because the only reason they had been created was as a prize for winning a TV competition. One of them sued for 'constructive dismissal' on the basis she felt that not creating any work for her was an attempt to force her to quit. She lost, presumably on the basis that most people given nothing to do on 3x average salary would either find something to do or consider themselves extremely lucky...

mixmastamyk(3422) 5 days ago [-]

Would love to see the finely-crafted, artisanal HTML and heirloom CSS.

mixmastamyk(3422) 3 days ago [-]

^ organic JavaScript ;)

crispyambulance(3799) 5 days ago [-]

My parents ran their own tailor shop in the '80s, barely making ends meet, pulling in less than $20K a year.

It wasn't for lack of business; my father was a master tailor trained in Italy and capable of elite bespoke craftsmanship, and they had as much business as they could handle. The problem was that they were charging what they thought the work was worth rather than what their customers were willing to pay.

At some point, during the Reagan years, my mother had an epiphany and jacked up the prices massively, far beyond what my father thought was remotely reasonable. The result? Even more business, more pressure, more return customers. That put me and my brother through an expensive college.

There's something about high rates that makes customers feel more important. It's a status thing, and it also propels them to take you more seriously, even if they have you do low-value stuff.

m463(10000) 4 days ago [-]

'I don't want some small-time amateur working on my $5000 power suit!'

projektfu(4117) 4 days ago [-]

Bravo, but the article was about being placed at a typical rate on a 20-hour job that became a 300-hour job. Here's an article about what you're talking about: https://hbr.org/2017/10/why-you-should-charge-clients-more-t...

55555(4082) 4 days ago [-]

One random day in the '80s, IIRC, Rolex tripled their prices, and they haven't lowered them since. The same watch cost three times as much at retail one day as it did the day prior.

prof1le(10000) 5 days ago [-]

The book 'Influence: The Psychology of Persuasion' mentions studies that mirror what your parents experienced.

Long story short, a jeweler was trying to move some turquoise and told an assistant to sell them at half price while she was gone. The assistant accidentally doubled the price, but the stones still sold immediately.

Turns out there's a phenomenon where humans automatically associate price with quality. So being charged more makes us think we're getting better quality, regardless of the actual quality.

gus_massa(1462) 4 days ago [-]

This reminds me of one of my favorite comments from patio11: https://news.ycombinator.com/item?id=4477088

>> Frew, who apprenticed with a Savile Row tailor, can — all by himself, and almost all by hand — create a pattern, cut fabric and expertly construct a suit that, for about $4,000, perfectly molds to its owner's body. In a city filled with very rich people, he quickly had all the orders he could handle.

> You don't have to be Wall Street to figure out the bleedingly obvious solution to being a starving artist who has so much work they have to turn work away. Raise the prices. Then raise the prices. Then when you're done with that, raise the prices.

> At some point you'll be too expensive for the typical businessman, which will make you absolutely crack for a certain type of person common in New York, thus defeating all efforts at being less busy. So it goes. I guess you will have to raise prices.

otakucode(10000) 4 days ago [-]

I've a friend who has run his own small IT business for a bit over a decade. When I talk to him and he mentions how much he charges, I always tell him it is far too low. He sees it as being easy for him, so he doesn't think he should charge much. I've tried to explain to him that when the plumber comes over, you're not loading that guy down with quantum physics work... he knows how to unclog your drain or run a new water line. That's easy for him. He charges a high amount because his services are valuable. And I, for one, am happy to pay that plumber the high amount. It saves me having to invest far larger amounts in learning and tooling up to do it myself.

I've worked alongside him a few times on projects his clients had that required coding work alongside the hardware and sysadmin stuff, and each time I've had to badger him into charging double or more what he wanted to charge. And of course the customers paid it, because it was still a good deal, and I could show them a conservative estimate that said the system would pay for itself in savings in under 2 years. When freelancing, those are my favorite contracts: the ones where you can show the customer up front that you will be saving them money in the long run. It's always much easier to sell them at that point, and I think the amount it's going to save is a pretty good proxy for the value of the work.

antt(10000) 4 days ago [-]

If you want to be paid more: ask for more. It is astonishing how few people do.

stronglikedan(10000) 5 days ago [-]

But also, charge accordingly if you don't have to post your prices (as your parents probably did). A lawyer is likely willing to pay a lot more for the same website than a tailor shop is, and I'll gladly take both of their money.

Bokanovsky(4125) 5 days ago [-]

This is a classic example of a Veblen good [1].

'Veblen goods are types of luxury goods for which the quantity demanded increases as the price increases, an apparent contradiction of the law of demand, resulting in an upward-sloping demand curve. Some goods become more desirable because of their high prices.'

The suits are expensive, so they must be good. It's also a status signal to others that you can afford such goods. (Edit: minor typo.)

[1] https://en.wikipedia.org/wiki/Veblen_good

conanbatt(4129) 4 days ago [-]

It's not the same customers.

Welcome to marketing!

biztos(4006) 4 days ago [-]

I'm not sure this is totally applicable to, say, a tailor shop, but I do wish more craftspeople would try charging more money for quality work.

For example, say I want some shelves built. If my shelf-builder charges $X, that's fine, but what if I would happily pay $2X?

On the one hand, I might be personally miffed to know I'm paying 2X instead of the X someone else is paying. On the other hand, I'd probably be the more satisfied customer if I don't know (or have the discipline to ignore) the price gap, because the shelf-builder is probably going to give extra effort in hope of getting more jobs from the 2X clientele.

It would be interesting to explore what would make your 2X client feel good even if they know they are paying double. And would that scale to 4X or 40X clients?

mywittyname(10000) 5 days ago [-]

If you respect yourself, then you'll respect your work.

bparsons(3996) 4 days ago [-]

It is a rule of thumb in any client or contract work that the less you are charging them, the more difficult and demanding the customer will be.

EGreg(1721) 4 days ago [-]

Are there any examples of this but with virtual things? Sites, apps, digital goods? Some status thing?

I can understand when it comes to physical goods.

dcl(10000) 4 days ago [-]

I had worked for a big bank doing financial modelling, reporting, random ad-hoc stuff for about 3.5 years before getting fairly bored. I quit and went to work for a small AI/ML* consulting firm whose biggest client turned out to be the same bank I used to work for.

Before I left, I gave my boss an opportunity to beat the new salary that was offered to me. He said he couldn't.

My annual pay was ~170k at the bank, but now they pay $2500 a DAY for me to be there, and, as a consultant, I work FAR less hard and far fewer hours (albeit in a completely different division). I've been here for 10 months now and it seems like it will continue indefinitely...

This article has strongly reminded me to start doing my own thing and charge even more.

*In reality, all I do is data engineering/munging because their data models and systems are so poor.

bitcoinmoney(10000) 4 days ago [-]

2500$/day before tax that's nice. Teach me your ways master... 1. Are you phd from a top school or boot camp grad?

2. Phd related to AI?

aequitas(4116) 5 days ago [-]

A valuable lesson I learned at a young age from one of my first 'customers', when doing computer repair jobs for friends and neighbours, is that you don't get paid for what you can do, but for the value you add or the time (money) you save someone.

Most of the time the problems I had to solve were easy ones. Install a printer, update software, remove toolbars, email settings, etc. All 5-minute jobs, seldom totalling more than an hour or two an evening. Not even worth asking money for, in my opinion, because it was so easy and quick for me to do. But my neighbour always insisted I accept his money. Because for him, having to solve these issues himself would cost him multiple evenings. So the money he was giving me, which felt like too much to me, was still a bargain for him. Also, in his words, it was easy for me because I had spent years in training to acquire this knowledge, or as others might call it: wasting your time behind that computer playing video games.

nostalgk(10000) 4 days ago [-]

Not sure I quite understand the mechanics behind it as well, but I've been through a similar training program and it appears to be effective.

twic(3453) 4 days ago [-]

> it was easy for me because I had spent years in training to acquire this knowledge

'knowing where to tap':


wallace_f(1214) 4 days ago [-]

That's lucky. When I was 19, around the height of IE and 'every Windows box with malware,' I tried to start a similar business.

Probably my wealthiest client, with an oversized SUV in the driveway, had a home computer rendered unusable by malware--something I had fixed before. But she ended up asking me to leave. She decided she wanted 'real professionals' working on her computer. She told me I didn't know what I was doing, saying 'Are you really going to fix it for this much, or do you actually need to just start searching the web for what you are doing?' (she saw that I had performed a Google search...).

It took me way, way too long in life to learn most of it is just perception and learning how to manage dealing with less-than-ideal people.

Wistar(10000) 4 days ago [-]

The (probably apocryphal) Picasso napkin story:

Picasso is sitting in a Paris cafe when a fan approaches the artist and asks that he make a quick sketch on a paper napkin. Picasso acquiesces, draws his dove and promptly hands it back to his admirer along with an ask for a rather large sum of money. The fan is flummoxed. "How can you ask for so much? It took you only a minute to draw this." To which Picasso replies, 'No, it took me 40 years.'

jamesb93(10000) 5 days ago [-]

So much waste - I'm not saying he didn't earn his money but how is it possible for a company to run cash positive when it wastes like that?

mbreedlove(10000) 5 days ago [-]

If they can capture $1mm revenue from the website, then the $21k they paid for it is a great investment.

beager(10000) 5 days ago [-]

I've realized that a lot of companies settle into enormous tailwinds, and the mission of the company then becomes 'don't screw it up'.

I've worked my entire career in startups and venture-backed companies. I cannot fathom the level of sloth, but I'm sure when I'm older I'll crave it.

irrational(10000) 5 days ago [-]

I work for a Fortune 100 company, and I've been asking myself this exact question nearly every day for the past 18+ years. The amount of money that is wasted is astonishing. $18,000 for a static html page is so little that it's almost laughable. We think it's crazy, because we have a clue what it actually takes to build the web page. But, the people who pay the bills are not technical and have no clue, so they would happily pay $180,000!

rocky1138(673) 5 days ago [-]

RIM would like to know your location

pornel(3318) 5 days ago [-]

Large companies operate on a completely different scale of money. If this was a page for a product that's going to make a million dollars, this expense was still a rounding error.

setr(10000) 5 days ago [-]

It's really easy: you just make bigger margins

Software particularly has stupidly big margins (and stupidly big overhead) when you're talking companies the size of oracle and microsoft

dzhiurgis(4035) 5 days ago [-]

I'm in midst of a contract that was supposed to take 2-3 weeks - my mate asked for help, they desperately need good developers.

I think it's 6 months now ($60k++), while the past 3 were 'we are almost there now'. It's all typical crap - trying to squish some crap into JIRA, molesting Slack in some of the weirdest ways, a tester reporting bugs in less than 6 sentences... All I can think of is that I didn't ask for a high enough hourly rate and I want this over ASAP.

davinic(10000) 5 days ago [-]

If you've signed a contract for a fixed period, demand that they raise your rate at each renewal. Don't give them the entire benefit of your flexibility without some concession on their end. If they want you month-to-month, your rate changes month-to-month.

rocky1138(673) 5 days ago [-]

This is how corporate works. They have budgets for things. The money doesn't come out of the pocket of the person who cuts the cheque. You send in an invoice and it gets paid. If it doesn't, the company is insolvent or they are at risk to lawsuits which will cost them more than your paltry $18000. But they don't even think about the lawsuit part. Bill comes in, cheque goes out.

wjnc(4118) 5 days ago [-]

Can you give my boss the 101? He has me methodically checking invoices against tenders. I would actually argue the other way. As our consultants rely on our return business, they would quickly solve any problem with the invoice even if not to the letter correct. We have bargaining power and they don't want to bite the hand that feeds them.

derefr(3593) 5 days ago [-]

How do they prevent paying random people who decide it'd be funny to send them an invoice?

buttcoinslol(3855) 5 days ago [-]

At my company, AP matches every invoice against a PO, and project managers/department heads are also responsible for approving invoices to be paid, so there are two chances to catch fake invoices or wrong amounts, etc.

gk1(124) 5 days ago [-]

Great illustration of why hourly billing makes no sense, for either side.

In this case, as usual, the amount of hours 'spent' on the project has little to do with actual value provided.

From the contractor's side, he's excited about getting 12x the original quote, instead of realizing he's been severely undercharging for his work and could've been earning 10x or more all this time. I wonder if the author will start charging appropriately for the value he's providing, or if he'll consider this a fluke and continue with $75/hour.

Phrased another way... How many times did you complete a project within the estimated time and get paid $1,500, when actually the company would have been glad to pay $18,000?

Last week there was a thread about consulting tips. I couldn't believe how many people were arguing for hourly billing. One person was even proud of billing by the minute! I hope those people see this story and realize what they're leaving on the table.

I have similar stories to this, where work got delayed due to issues on client's end. One time, I spent a month doing nothing while the client was dealing with something, which later turned out to be a big acquisition. If I billed by 'hours worked' then I'd get nothing, but because I had a monthly retainer I still got paid.

Edit: I'm not advocating 'fixed price.' I'm advocating monthly retainers.

aplummer(4112) 5 days ago [-]

Hourly billing is how you protect yourself against a sloppy client like this, especially with unpredictability. I have seen fixed price blow up so much I would never ever do it personally - I would have to add so much it would be astonishing to be worth it.

noir_lord(4039) 5 days ago [-]

One of the first web systems I ever put into production after I jumped from desktop to web (still in use; other than modernization, I went 5 years without a bug report, which still astounds me), I charged £1500 for (it was about 20 hours of actual work; the rest was spent learning the right way to do things), and did the job over 6 weeks.

Client let slip they'd been quoted £11,000 for it and 3mths.

Next job they asked me to do was £5,000 and I said 3mths (it took less than a month and I was working full time).

I learnt that lesson fast: don't charge what it's worth to you, charge what it's worth to them.

Or as an ex-boss pithily put it 'serious people charge serious money'.

himynameisdom(10000) 5 days ago [-]

GK, I remember your presence on that thread. For those who were not there, it's under comment history. You spoke to the point of people needing your help (and not just anybody), which is kind of ridiculous in this case and the case from last week. You're opining value-driven fixed rate wisdom on a thread about a simple HTML page. If you dictate the pace of a specialty, of course you can charge whatever you want and call it 'value-driven.' For the other 99% of consultants out there, supply and demand bring pricing to an equilibrium. Retainers need not apply here nor in 90% of first-time client engagements.

Consultants who win the job will always leave money on the table. That's how you get the gig to begin with. Value is relative, and it takes at least two to tango come contract time until cost (what the client sees) and value (what you see) intersect.

Billing hourly brings pricing transparency clients want while protecting you from true under-utilization. If you're not being utilized because of red tape that was unassumed at contract time, a change order is the logical next step to account for scope remaining/additional scope.

dymk(10000) 5 days ago [-]

I'm gonna disagree that hourly billing makes no sense for the contractor. He clearly just didn't charge enough per hour; $75/hour is way too low for contract work, and his $21k could have been $56k at $200/hour.

What if the project was billed at a fixed cost, he negotiated $21k, but the project took a year to complete because the company moved so slowly? That'd be a terrible salary. He'd have to quit and somehow bill even with no deliverables. How hairy would a contract covering that be to defend when you sue?

osrec(3233) 5 days ago [-]

You should try working in an investment bank as a contractor. You need to find the right team, but a lot of them are massive collections of people doing virtually nothing but getting paid great daily rates. The trick is to look busy and wrap your team/yourself in a perceived sense of enigma and complexity. If you play your cards right, you can end up in a situation where no one will ask questions as long as you fire off an email now and again. You can keep getting paid for doing almost nothing for years.

mprev(3594) 4 days ago [-]

Ugh, but who wants to do nothing all day? You get one life; why waste your days?

solotronics(10000) 5 days ago [-]

I have a theory that in most large companies the 80/20 rule applies. 20% of the engineers do 80% of the work.

paganel(2546) 5 days ago [-]

The George Costanza way of doing business. The more I grow less young (I'm close to 40 now) the more I realize that the Costanza character is one of the closest approximations of daily life in a society like ours. Dilbert or "The Office" TV series are also very good fits but George Costanza is in a world of his own.

linuxftw(10000) 5 days ago [-]

This describes >80% of engineering departments at past large firms where I've worked. Ironically, these people are 'architects' and get paid handsomely.

stefek99(4057) 3 days ago [-]

The trouble with bank that I had - most of the websites were firewalled.

I was gardening various projects on GitHub and increasing my StackOverflow reputation.

I wasn't enjoying it.

At some level there is a need for accomplishment.

Poor work ethic becomes a virus; it's difficult to concentrate at home later.

koala_man(4098) 5 days ago [-]

As a teenager I once got paid a flat rate of $300 for what turned out to be a ten minute job with Excel vlookups.

Funny that my hourly rate peaked at 17.

lixtra(4112) 5 days ago [-]

You probably saved the company 1h of work per week for some years. Finding another expert would have taken them half a day of searching. So it made economic sense from their part as well. Did you learn to seek such situations to profit?

jbob2000(10000) 5 days ago [-]

Pfft that's nothing. We just had a contractor charge us $50k to setup a simple HTTP proxy. I looked at the code; it's a .NET starter app with our endpoints in it that route to other endpoints. $50k and it probably took them 2 hours, which was probably mostly just packages downloading.

BonesJustice(10000) 5 days ago [-]

Indeed, $50k for a few hours of work is a lot better (for the contractor) than the $21k for 280 hours chronicled in the original post.

Hell, considering probably he spent most of his time sitting around, bored out of his mind, and had to commute 50 miles each day, I would have passed on the contract.

holler(10000) 5 days ago [-]

did you pay it? that's crazy!

vokep(10000) 5 days ago [-]

Reading stuff like this makes me wonder if I'm not missing out, having a salaried job. :/

S_A_P(4099) 5 days ago [-]

I don't find this to be beyond belief, but there are a few things the author should have done differently.

1) Notify when hours were exceeded
2) Get written notification that he was still required to come in to the office while waiting for assets or otherwise at a blocker
3) Ask questions to further cement the requirements
4) Pick up a phone?

I think that ethically this was not a great move on this persons part, but we live and learn, and hopefully they did learn from the experience.

Large companies have budgets that are usually 'use it or lose it', so the ROI doesn't really matter most of the time. Secondly, large companies are less likely to have 'gatekeeper' folks ensuring that they are not wasting hours when the timescale is less than one month. As costs escalate and budgets get blown, that is when they thin out the contractors.

AceJohnny2(4031) 5 days ago [-]

> I think that ethically this was not a great move on this persons part, but we live and learn, and hopefully they did learn from the experience.

Sounds like the author reluctantly learned the opposite lesson, that ethics are silly and big companies will happily overpay you. He mentions multiple times his qualms about the whole situation.

analogmemory(10000) 5 days ago [-]

I mean they were pretty clear they didn't care about the money. Emailing daily to check in on progress is good enough. I've been in a similar situation. The manager isn't really concerned about the money. They just need someone to justify their expenses.

> We need your full undivided attention to complete this project. For the duration of the contract, you will work exclusively with us to deliver result in a timely manner. We plan to compensate you for the trouble.

duxup(3882) 5 days ago [-]

I know someone working as a contractor for a big company.

1 year contract.

6 months have gone by and they've done ... nothing. Their boss keeps saying 'Don't worry we'll get to you, we're just swamped right now, you're good.'

They're already talking about extending the contract.

tomduncalf(1139) 5 days ago [-]

Yeah this happens more than you'd expect! Never had it myself but I've worked at places where it's happened to colleagues (usually being assigned unimportant BAU work rather than actually doing nothing, but I do know of people who've basically had nothing to do!)

yibg(10000) 4 days ago [-]

I had a friend working on contract for the local city government. The contract was, I believe, for 2 years. A year in, the project got cancelled, but since his contract had already been signed they just kept paying him with no work for him to do. He just went in (because of course he still had to show up for some reason) and played games all day. Now I partially understand why so many government projects go over budget.

55555(4082) 4 days ago [-]

Yeah I know a guy who had that waiting gig for 1.5 years before he was finally assigned his first bit of work.

AchieveLife(10000) 5 days ago [-]

That's why I offer a retainer package. Get paid for just being available when needed and work a non-conflicting contract elsewhere.

at-fates-hands(3568) 5 days ago [-]

I had a similar experience a few years ago.

I quit at a company I was contracting at because they kept dangling the whole, 'We're going to convert you to an FTE next.' in the meantime, I was working less than 20 hours a week. If you didn't have a project to bill hours to, you didn't get paid, period. I was floating between teams, fixing bugs and doing minor stuff, not being able to bill much of anything. Once I quit I was offered another contract role. I basically told the recruiter, 'Listen, if I'm in the office, I'm getting paid for my time, period.' Recruiter got it cleared with HR and the hiring manager.

My first day went like this:

Manager: 'Ummm yeah, the two major projects we had you slated on, ummmm those got put on hold for the time being. Get your desk and PC setup and we'll have something for you soon.'

I literally went 4 months and barely billed any real project work. My last two weeks I had 36 hours of non-billable time. I had two weeks where I actually billed a full week's worth, when a dev took off for his honeymoon having done exactly zero of the work he was assigned. The funny part is that when I quit, the hiring manager told me he would hire me in a minute and to keep in contact.

In the meantime, I was able to learn AngularJS and some other stuff while I was sitting at my desk all day. In a sense, I was very productive when I was there.

sharkweek(894) 5 days ago [-]

Big companies (rather, people at big companies) WANT to spend money on this kind of stuff for all sorts of reasons.

-A team might have use-it-or-lose-it budget, so they have to spend it on something, and a contractor might be the lucky recipient!

-Tax purposes!

-Spending a lot on a contractor gives them someone to 'fire' when they need to explain why something wasn't getting done or something went poorly!

The list goes on!

All that being said, as a consultant myself, I consider those types of projects windfall, as they tend to be the ones that end abruptly. It's kind of a scary feeling getting paid without actual work to do. I have found I 100% prefer the projects where there are clear tasks, goals, and results to report, if for nothing else than my own sanity.

blunte(4113) 5 days ago [-]

I've lived this. Myself and a very expensive team of EY kids were waiting eagerly every day for anyone in corp management to throw us any kind of tasks.

On the rare occasion that we were given a task, we would all descend upon one computer like vultures, group-solving the problem typically in 60 minutes or less. Then it was back to doing nothing.

Diplomacy (the game) became our primary activity. It was fun, but such a terrible waste of time, talent, and money.

maxxxxx(4001) 5 days ago [-]

Happens all the time. We can hire only at certain times when budgets open, so you go ahead and hire but you have no time to deal with the person. It's better to have the person sit around until you have time because the alternative is losing the contractor and not being able to hire when you really need someone.

Stupid but very real. I always find it funny that wasting 100k this way is perfectly fine but a 5k raise is almost impossible.

misiti3780(846) 5 days ago [-]

i worked for lockheed martin 10 years ago (give or take) on a client site. I was W2 (although I wish i was 1099). Anyways, when the government changed from W -> Obama a lot of the DOD contracts were changing because everyone anticipated the Obama was going to cut defense spending (which he did). The project found out that the DOD was not going to be able to re-fund the project, but we had to continue to the end. I ended up forced to 'work from home' for about six months until the contract ran out. I legally couldnt get another job, but I went on vacation for a month or two. So I calculate I got paid 1/2 my salary at the time + benefits for doing exactly nothing.

NightlyDev(10000) 4 days ago [-]

If that happens then you're definitely charging too little. In a lot of places you can't actually work for just one client for an extended time as a consultant or freelancer, as you might then just as well be an employee.

netwanderer3(10000) 4 days ago [-]

I know a couple of friends who worked at major banks. They all started out as contractors. I remember one guy told me during his first year that all he did was browse Reddit at work. He was a close friend and didn't want me to join his team because he knew I would display a level of work ethic that would make him look bad.

bryanrasmussen(425) 5 days ago [-]

I've worked at pretty big companies and the only way I've ever gotten away with not producing any code was because I was going through onboarding hell. At the worst 2 weeks to get onboarded.

dv_dt(4059) 4 days ago [-]

Sometimes having resources on standby, ready to start really is the fastest way to get through a project (sometimes it is just waste though)...

biztos(4006) 4 days ago [-]

I've seen the same thing, off and on, for over 20 years.

Employee or contractor gets stuck somewhere with nothing much to do... speaks to manager about it repeatedly... gets the 'just find something to do, we'll get to you' speech... fails (often despite good-faith effort) to find anything useful to do... and eventually gives up.

Worst case I personally witnessed was a quite talented dev going six months without any actual project, then another couple before quitting.

Another case was a guy who tried to use his abundant free time to learn other skills but mostly ended up playing Myst, which proves this phenomenon is as old as dirt. :-). (He ultimately gave up the Myst Gig and quit, I'm sure to the consternation of his manager who probably lost a head-count over it.)

llamataboot(1267) 5 days ago [-]

Ha ha cute! Meanwhile that's about half of what we pay teachers for a full year of work tending to the emotional and intellectual well-being of children.

fishstick2000(10000) 5 days ago [-]

u mad

blunte(4113) 5 days ago [-]

Don't hate the player, hate the game. And if you want to compare income, look at C-level executives... don't look at devs who are actually doing work.

himynameisdom(10000) 5 days ago [-]

This is misguided anger at best, and trolling at worst. What were you hoping to achieve with this?

ycombonator(2020) 5 days ago [-]

This is how your taxpayer money is wasted in state & federal gov IT agencies. I spent an entire year doing nothing in a gov agency and in the end I was so bored that I quit. Fannie & Freddie hire tons of non-citizen contractors with usually no work. It's a big carnival.

WaxProlix(4130) 5 days ago [-]

It's all big orgs, everywhere. Seen a lot of it even at hip modern BigCorps. Leave your axe and grinder at home :)

Historical Discussions: Going Critical (May 14, 2019: 978 points)

(982) Going Critical

982 points 6 days ago by bkudria in 453rd position

www.meltingasphalt.com | Estimated reading time – 29 minutes | comments | anchor

Going Critical

by Kevin Simler

If you've spent any time thinking about complex systems, you surely understand the importance of networks.

Networks rule our world. From the chemical reaction pathways inside a cell, to the web of relationships in an ecosystem, to the trade and political networks that shape the course of history.

Or consider this very post you're reading. You probably found it on a social network, downloaded it from a computer network, and are currently deciphering it with your neural network.

But as much as I've thought about networks over the years, I didn't appreciate (until very recently) the importance of simple diffusion.

This is our topic for today: the way things move and spread, somewhat chaotically, across a network. Some examples to whet the appetite:

  • Infectious diseases jumping from host to host within a population
  • Memes spreading across a follower graph on social media
  • A wildfire breaking out across a landscape
  • Ideas and practices diffusing through a culture
  • Neutrons cascading through a hunk of enriched uranium

A quick note about form.

Unlike all my previous work, this essay is interactive. There will be sliders to pull, buttons to push, and things that dance around on the screen. I'm pretty excited about this, and I hope you are too.

So let's get to it. Our first order of business is to develop a visual vocabulary for diffusion across networks.

A simple model

I'm sure you all know the basics of a network, i.e., nodes + edges.

To study diffusion, the only thing we need to add is labeling certain nodes as active. Or, as the epidemiologists like to say, infected:

This activation or infection is what will be diffusing across the network. It spreads from node to node according to rules we'll develop below.

Now, real-world networks are typically far bigger than this simple 7-node network. They're also far messier. But in order to simplify — we're building a toy model here — we're going to look at grid or lattice networks throughout this post.

(What a grid lacks in realism, it makes up for in being easy to draw ;)

Except where otherwise specified, the nodes in our grid will have 4 neighbors, like so:

And we should imagine that these grids extend out infinitely in all directions. In other words, we're not interested in behavior that happens only at the edges of the network, or as a result of small populations.

Given that grid networks are so regular, we can simplify by drawing them as pixel grids. These two images represent the same network, for example:

Alright, let's get interactive.

The network below has playback controls at the bottom. Press the ▷ button to watch the activation spread, or step through one moment at a time:

In this simulation, an active node always transmits its infection to its (uninfected) neighbors.

But this is dull. Far more interesting things happen when transmission is probabilistic.


In the simulation below, you can vary the transmission rate using the slider at the bottom:

This is what's called an SIR model. The initials stand for the three different states a node can be in:

  • Susceptible
  • Infected
  • Removed

Here's how it works:

  1. Nodes start out as Susceptible, except for a few nodes (like the center node above) which start as Infected.
  2. At each time step, Infected nodes get a chance to pass the infection along to each of their Susceptible neighbors, with a probability equal to the transmission rate.
  3. Infected nodes then transition to the Removed state, indicating that they're no longer capable of infecting others or being infected again themselves.

In a disease context, Removed may mean that the person has died or that they've developed an immunity to the pathogen. Regardless, we say that they're 'removed' from the simulation because nothing ever happens to them again.
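The three rules above can be sketched in a few lines of Python. This is a hypothetical, minimal re-implementation, not the essay's own code: the grid is finite for simplicity (unlike the infinite grid imagined above), and `step_sir`, the state dictionary, and all parameter names are assumptions.

```python
import random

def step_sir(states, rate, rng):
    """One synchronous SIR time step on a 4-neighbor grid.
    states maps (x, y) -> 'S', 'I', or 'R'."""
    newly_infected = set()
    for (x, y), s in states.items():
        if s != 'I':
            continue
        # Rule 2: each Susceptible neighbor catches the infection
        # with probability equal to the transmission rate.
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if states.get(nb) == 'S' and rng.random() < rate:
                newly_infected.add(nb)
    # Rule 3: Infected nodes transition to Removed.
    for pos, s in states.items():
        if s == 'I':
            states[pos] = 'R'
    for pos in newly_infected:
        states[pos] = 'I'

# Rule 1: everyone Susceptible, except one Infected node at the center.
n = 21
states = {(x, y): 'S' for x in range(n) for y in range(n)}
states[(n // 2, n // 2)] = 'I'

rng = random.Random(0)
while any(s == 'I' for s in states.values()):
    step_sir(states, 0.5, rng)

removed = sum(1 for s in states.values() if s == 'R')
```

Because Infected nodes are used up, the loop always terminates on a finite grid; `removed` then counts how far the outbreak got before dying out.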

Now, depending on what we're trying to simulate, we may need something other than an SIR model.

If we're simulating the spread of measles or an outbreak of wildfire, SIR is perfect. But suppose we're simulating the adoption of a new cultural practice, e.g., meditation. At first a node (person) is Susceptible, because they've never done it before. Then, if they start meditating (perhaps after hearing about it from a friend), we would model them as Infected. But if they stop practicing, they don't die or drop out of the simulation, because they could easily pick up the habit again in the future. So they transition back to the Susceptible state.

This is an SIS simulation — which (you guessed it) stands for Susceptible–Infected–Susceptible. Here's what it looks like on a grid:


As you can see, this plays out very differently from the SIR model.

Because the nodes never get used up (Removed), even a very small and finite grid can sustain an SIS infection for a long time. The infection simply hops around from node to node and eventually back again.

Despite their differences, SIR and SIS turn out to be surprisingly interchangeable for our purposes today (namely: developing intuition). So we're going to anchor on SIS for the remainder of this essay — mostly because it dances around longer and is therefore more fun.
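The SIS variant changes only one rule: instead of moving to Removed, an infected node reverts to Susceptible, so it can catch the infection again later. A sketch under the same assumptions as before (synchronous updates; whether a node can be reinfected in the same step it recovers is a modeling choice, since the essay's exact implementation isn't shown here):

```python
import random

def step_sis(states, rate, rng):
    """One SIS step: identical to SIR except that infected nodes
    return to 'S' instead of moving to 'R'."""
    newly_infected = set()
    for (x, y), s in states.items():
        if s != 'I':
            continue
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if states.get(nb) == 'S' and rng.random() < rate:
                newly_infected.add(nb)
    for pos, s in states.items():
        if s == 'I':
            states[pos] = 'S'  # back to Susceptible, not Removed
    for pos in newly_infected:
        states[pos] = 'I'

# Even a small, finite grid can sustain an SIS infection for a while,
# because nodes never get used up.
n = 9
states = {(x, y): 'S' for x in range(n) for y in range(n)}
states[(n // 2, n // 2)] = 'I'
rng = random.Random(1)
for _ in range(50):
    step_sis(states, 0.5, rng)
```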

Going critical

Now, in playing with the simulations above — both SIR and SIS — you may have noticed something about the longevity of the infection.

At very low transmission rates, like 10 percent, the infection tends to die out. Whereas at higher values, like 50 percent, the infection remains alive and takes over most of the network. If the network were infinite, we could imagine it continuing on and spreading outward forever.

This limitless diffusion has many names: 'going viral' or 'going nuclear' or (per the title of this post) going critical.

It turns out that there's a precise tipping point that separates subcritical networks (those fated for extinction) from supercritical networks (those that are capable of neverending growth). This tipping point is called the critical threshold, and it's a pretty general feature of diffusion processes on regular networks.

The exact value of the critical threshold differs between networks. What's shared is the existence of such a value.

Here's an SIS network to play around with. Can you find its critical threshold?


By my tests, the critical value seems to be between 22 and 23 percent.

At 22 percent (and below), the infection eventually dies out. At 23 percent (and above), the initial infection occasionally dies out, but on most runs it manages to survive and spread long enough to survive forever.

(By the way, there's an academic cottage industry devoted to finding these critical thresholds for different network topologies. For a taste, I recommend a quick scroll down the Wikipedia page for percolation threshold.)

In general, here's how it works: Below the critical threshold, any finite infection on the network is guaranteed (with probability 1) to eventually go extinct. But above the critical threshold, it's possible (p > 0) for the infection to carry on forever, and in doing so to spread out arbitrarily far from the initial site.

Note, however, that an infection on a supercritical network isn't guaranteed to go on forever. In fact, the infection will frequently fizzle out, especially in the very early steps of the simulation.

To see this, suppose we start with a single Infected node and its 4 neighbors. On the first step of the simulation, the infection has 5 independent chances to spread (including the chance to 'spread' to itself on the next time step):

Now suppose the transmission rate is 50 percent. In that case, the first step of the simulation amounts to doing 5 coin flips. And if they all come up tails, the infection will be extinguished. This happens about 3 percent of the time — and that's just on the first step. An infection that survives the first step will then have some (typically smaller) probability of going extinct on the second step, and some (even smaller) probability of dying on the third step, etc.
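That 3 percent figure is just (1/2)^5, the chance that all five independent coin flips come up tails:

```python
# With a 50% transmission rate, all 5 transmission attempts fail
# (and the infection goes extinct on step one) with probability (1/2)^5.
p_extinct_first_step = (1 - 0.5) ** 5
print(p_extinct_first_step)  # 0.03125, i.e. about 3 percent
```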

So even when the network is supercritical — even if the transmission rate is 99 percent — there's a chance that the infection will fizzle out.

But the important thing is that it won't always fizzle out. When you add up the fizzle-probabilities for all the steps out to infinity, the result is less than 1. In other words, with nonzero probability, the infection carries on forever. This is what it means for a network to be supercritical.

SISa: spontaneous activation

Up to this point, all our simulations have started with a little nugget of preinfected nodes at the center.

But what if we decide to start with nothing? Then we would need to model spontaneous activation — the process by which a Susceptible node randomly becomes Infected (without catching the infection from one of its neighbors).

This has been dubbed the SISa model. The 'a' stands for 'automatic.'
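In code, spontaneous activation is a small addition to a plain SIS update: after the transmission coin flips, every node that is still Susceptible gets an independent chance to light up on its own. A self-contained sketch (my own illustration, assuming a wrap-around grid, where a is the spontaneous activation rate):

```python
import random

def sisa_step(infected, p, a, size):
    """One SISa update on a size x size wrap-around grid: SIS-style
    transmission (4 neighbors + self, each with probability p), then
    spontaneous activation of susceptible nodes with probability a."""
    nxt = set()
    for (x, y) in infected:
        for t in [(x, y), ((x + 1) % size, y), ((x - 1) % size, y),
                  (x, (y + 1) % size), (x, (y - 1) % size)]:
            if random.random() < p:
                nxt.add(t)
    # The SISa twist: susceptible nodes can ignite without a neighbor.
    for x in range(size):
        for y in range(size):
            if (x, y) not in nxt and random.random() < a:
                nxt.add((x, y))
    return nxt

# Even starting from a completely empty grid, sparks appear on their own:
random.seed(1)
state = set()
for _ in range(10):
    state = sisa_step(state, p=0.2, a=0.01, size=20)
print(len(state))
```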

Below you can play with an SISa simulation. There's a new parameter, the spontaneous activation rate, which changes how often a spontaneous infection will occur. (The transmission rate parameter, which we saw earlier, is also present.)

What does it take to get the infection to spread across the whole network?


As you may have noticed, increasing the rate of spontaneous activation doesn't change whether or not the infection takes over the network. Instead, in this simulation, it's only the transmission rate that determines whether the network is sub- or supercritical. And when the network is subcritical (trans. rate ≤ 22%), no infection can catch on and spread, no matter how often spontaneous activations occur.

This is like trying to start a fire in a wet field. You might get a few dry leaves to catch, but the flame will quickly die out because the rest of the landscape isn't flammable enough (subcritical). Whereas in a very dry field (supercritical), it may only take one spark to start a raging wildfire.

We can see similar things taking place in the landscape for ideas and inventions. Often the world isn't ready for an idea, in which case it may be invented again and again without catching on. At the other extreme, the world may be fully primed for an invention (lots of latent demand), and so as soon as it's born, it's adopted by everyone. In-between are ideas that are invented in multiple places and spread locally, but not enough so that any individual version of the idea takes over the whole network all at once. In this latter category we find e.g. agriculture and writing, which were independently invented ~10 and ~3 times respectively.


Immunity

Suppose we make some nodes completely resistant or "immune" to activation. This is like putting them in the Removed state, then running SIS(a) on the remaining nodes.

You can play with it below. The 'Immunity' slider controls the percentage of nodes that are Removed. Try varying the slider (while the simulation is running!) to see its effect on whether the network is supercritical or not:


Changing how many nodes are immune absolutely changes whether the network is sub- or supercritical. And it's not hard to see why. When many nodes are immune, each infection has fewer opportunities to spread to a new host.

This turns out to have a number of very important practical implications.

One is preventing the spread of wildfires. Now, individuals should always take their own local precautions (e.g., never leaving an open flame unattended). But at a larger scale, small outbreaks are inevitable. So another mitigation technique is to ensure there are enough 'gaps' (in the network of flammable materials) that an outbreak can't take over the entire network. Thus firewalls and firebreaks:

Another outbreak that's important to stop is infectious disease. Enter here the concept of herd immunity. This is the idea that, even if some people can't be vaccinated (e.g., because they have compromised immune systems), as long as enough people are immunized, the disease won't be able to spread indefinitely. In other words, vaccinating enough people can bring a population down from supercritical to subcritical. When this happens, a single patient might still catch the disease (e.g., by traveling to another region and then back home), but without a supercritical network in which to grow, the disease will infect only a small handful of people.
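There's a rough back-of-envelope version of this. It's a mean-field sketch that ignores the grid's spatial structure, so the exact numbers shouldn't be taken too seriously: each infected node gets some number of transmission chances per step (5 in the essay's grid, counting the 4 neighbors plus itself), each succeeding with probability p, and attempts that land on immune nodes are wasted. The infection can only grow while p x chances x (1 - q) exceeds 1, which gives a critical immunity fraction:

```python
def herd_immunity_threshold(p, chances=5):
    """Mean-field estimate of the immune fraction q* needed to push the
    network subcritical, assuming each infected node gets `chances`
    independent transmission attempts, each succeeding with probability p.
    Growth requires p * chances * (1 - q) > 1; solve for q."""
    return 1 - 1 / (p * chances)

# At a 50% transmission rate, roughly 60% of nodes must be immune:
print(herd_immunity_threshold(0.5))
```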

Finally, we can use the concept of immune nodes to understand what happens in a nuclear reactor. In a nuclear chain reaction, a fissioning uranium-235 atom releases ~3 neutrons, which trigger (on average) more than one other U-235 atom to split. The new neutrons that are released then trigger further atoms to split, and so on exponentially:

Now, when making a bomb, the whole point is to let the exponential growth proceed unchecked. But in a power plant, the goal is to produce energy without killing everyone in the neighborhood. For this we use control rods, which are made from material that can absorb neutrons (like silver or boron). Because they absorb rather than release neutrons, control rods act like immune nodes in our simulation above, thereby preventing the radioactive core from going supercritical.

The trick to running a nuclear reactor, then, is to keep the reaction hovering just at the critical threshold, while making EXTRA! SPECIAL! SURE! that whenever anything goes wrong, the control rods slam into the core and put a stop to it.


Degree

The degree of a node is the number of neighbors it has. Up to this point, we've been looking at networks of degree 4. But what happens when we vary this parameter?

For example, we might connect each node not only to its 4 immediate neighbors, but also to its 4 diagonal neighbors. In such a network, the degree would be 8.
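In grid terms, these are the von Neumann (degree-4) and Moore (degree-8) neighborhoods from cellular automata; the only thing that changes is the list of neighbor offsets. A small illustrative sketch:

```python
# Degree-4 ("von Neumann") neighborhood: the 4 orthogonal neighbors.
DEGREE_4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

# Degree-8 ("Moore") neighborhood: orthogonal plus diagonal neighbors.
DEGREE_8 = [(dx, dy)
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def neighbors(x, y, offsets):
    """Coordinates of the neighbors of (x, y) under a given neighborhood."""
    return [(x + dx, y + dy) for dx, dy in offsets]

print(len(neighbors(0, 0, DEGREE_4)))  # 4
print(len(neighbors(0, 0, DEGREE_8)))  # 8
```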

You can play with this parameter below:



Again, it's not hard to understand what's happening here. When each node has more neighbors, there are more chances for an infection to spread — and thus the network is more likely to go critical.

This can have surprising implications, however, as we'll see below.

Cities and network density

Up to now, our networks have been completely homogeneous. Every node looks the same as every other node. But what if we subvert this assumption and allow things to vary across the network?

For example, let's try to model cities. We'll do this by creating patches of the network that are denser in connections (have higher degree) than the rest of the network. This is motivated by data that suggests that people in cities have wider social circles and more social interactions than people outside cities.

In the simulation below, we color Susceptible nodes based on their degree. Nodes out in the 'countryside' have degree 4 (and are colored in light gray), whereas nodes in the 'cities' have higher degrees (and are colored correspondingly darker), starting at degree 5 on the outskirts and culminating at 8 in the city center.

Can you get the initial activation to spread to the cities, and then remain only in the cities?


I find this simulation both obvious and surprising at the same time.

Of course cities can support more culture than rural areas — everyone knows this. What surprises me is that some of this cultural variety can arise based on nothing more than the topology of the social network.

This is worth dwelling on, so let me try to explain it more carefully.

What we're dealing with here are forms of culture that get transmitted simply and directly from person to person. For example, manners, parlor games, fashion trends, linguistic trends, small-group rituals, and products that spread by word of mouth — plus many of the packages of information we call ideas.

(Note: Person-to-person diffusion is complicated tremendously by mass media. So as you're thinking about these processes, it may help to imagine a more technologically primitive environment, e.g., Archaic Greece, where almost every scintilla of culture is transmitted during meatspace interactions.)

What I learned from the simulation above is that there are ideas and cultural practices that can take root and spread in a city that simply can't spread out in the countryside. (Mathematically can't.) These are the very same ideas and the very same kinds of people. It's not that rural folks are e.g. 'small-minded'; when exposed to one of these ideas, they're exactly as likely to adopt it as someone in the city. Rather, it's that the idea itself can't go viral in the countryside because there aren't as many connections along which it can spread.

This is perhaps easiest to see in the domain of fashion — clothing, hairstyles, etc. In the fashion network, we might say that an edge exists whenever two people notice each other's outfits. In an urban center, each person could see upwards of 1000 other people every day — on the street, in the subway, at a crowded restaurant, etc. In a rural area, in contrast, each person may see only a couple dozen others. Based on this difference alone, the city is capable of sustaining more fashion trends. And only the most compelling trends — the ones with the highest transmission rates — will be able to take hold outside of the city.

We tend to think that if something's a good idea, it will eventually reach everyone, and if something's a bad idea, it will fizzle out. And while that's certainly true at the extremes, in between are a bunch of ideas and practices that can only go viral in certain networks. I find this fascinating.

Not just cities

What we're exploring here are the effects of network density. This is defined, for a given set of nodes, as the number of actual edges divided by the number of potential edges. I.e., the percentage of possible connections that actually exist.
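Since edges are undirected, n nodes have n(n - 1)/2 potential edges between them. A quick sketch of the calculation (the function and example names are my own):

```python
def network_density(n_nodes, edges):
    """Fraction of potential undirected edges that actually exist."""
    potential = n_nodes * (n_nodes - 1) / 2
    return len(edges) / potential

# 4 nodes wired in a square: 4 edges out of 6 possible.
square = {(0, 1), (1, 2), (2, 3), (3, 0)}
print(network_density(4, square))  # 4/6, about 0.667
```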

So, as we've seen, urban centers have higher network densities than rural areas. But cities aren't the only place we find dense networks.

High schools are an interesting example. Consider, in a given neighborhood, the network that exists among the students vs. the network that exists among their parents. Same geographic area and similar population sizes, but one network is many times denser than the other. And it's no surprise, then, that fashion and linguistic trends proliferate among adolescents, and spread much more slowly among the adults.

Similarly, elite networks are generally much denser than non-elite networks — an underappreciated fact, IMO. (People who are popular or powerful spend more time networking, and so they have more 'neighbors' than ordinary folks.) Based on the simulations above, we would expect elite networks to support some cultural forms that can't be supported by the mainstream, based on nothing more than the average degree of the network. I'll leave it to you to speculate on what these forms might be.

Finally, we can apply this lens to the internet, by choosing to model it as a huge and very densely networked city. Not surprisingly, there are many new kinds of culture flourishing online that simply couldn't be sustained in purely meatspace networks. Most of these are things we want to celebrate: niche hobbies, better design standards, greater awareness of injustices, etc. But it's not all gravy. Just as the first cities were a hotbed for diseases that couldn't spread at lower population densities, so too is the internet a breeding ground for malignant cultural forms like clickbait, fake news, and performative outrage.


Knowledge

"The attention of the right expert at the right time is often the single most valuable resource one can have in creative problem solving." — Michael Nielsen, Reinventing Discovery

We often think of discovery or invention as a process that takes place in the mind of a singular genius. A flash of inspiration strikes and — eureka! — suddenly we get a new way to measure volume. Or the equations for gravity. Or the lightbulb.

But taking the perspective of the lone inventor and zeroing in on the moment of discovery is to take the node's eye view of the phenomenon. Whereas, properly construed, invention is something that happens on a network.

The network is important in at least two ways. First, preexisting ideas have to make their way into the mind of the inventor. These are the citations of a new paper, the bibliography section of a new book — the giants on whose shoulders Newton stood. Second, the network is crucial for getting a new idea back out into the world; an invention that doesn't spread is hardly worth calling an 'invention' at all. And so, for both of these reasons, it makes sense to model invention — or more broadly, the growth of knowledge — as a diffusion process.

In just a moment, I'll present a crude simulation of how knowledge might diffuse and grow within a network. But first I need to explain it.

At the start of the simulation, there will be 4 experts, one positioned in each quadrant of the grid, like so:

Expert 1 has the first version of the idea — let's call it Idea 1.0. Expert 2 is the kind of person who knows how to transform Idea 1.0 into Idea 2.0. Expert 3 knows how to transform Idea 2.0 into Idea 3.0. And finally, Expert 4 knows how to put the finishing touches on the idea to create Idea 4.0.

This might represent a craft (technê) like origami, in which techniques are elaborated and combined with other techniques to produce more interesting constructions. Or it might represent a field of knowledge (epistêmê) like physics, in which later work builds on more fundamental work established by earlier physicists.

The conceit of this simulation is that we need all four experts to contribute to the final version of the idea. And at each phase of development, the idea has to diffuse to the relevant expert.
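The mechanics described above can be sketched in a few lines of Python. This is my own simplified rendering, not the essay's widget code: idea versions diffuse SIS-style on a wrap-around grid, and when a transmission of version k reaches the expert who knows the k→k+1 step, the idea is upgraded on the spot.

```python
import random

def knowledge_step(state, experts, p, size):
    """One diffusion step. `state` maps grid node -> highest idea version
    held there; `experts` maps node -> the version that expert can create
    (expert k turns Idea k-1 into Idea k on receipt)."""
    nxt = {}
    for (x, y), version in state.items():
        targets = [(x, y), ((x + 1) % size, y), ((x - 1) % size, y),
                   (x, (y + 1) % size), (x, (y - 1) % size)]
        for t in targets:
            if random.random() < p:
                v = version
                if experts.get(t) == v + 1:
                    v += 1  # the right expert upgrades the idea
                nxt[t] = max(nxt.get(t, 0), v)
    return nxt

# 4 experts, one per quadrant; Expert 1 starts out holding Idea 1.0.
size = 12
q = size // 4
experts = {(q, q): 1, (3 * q, q): 2, (3 * q, 3 * q): 3, (q, 3 * q): 4}
random.seed(2)
state = {(q, q): 1}
for _ in range(300):
    state = knowledge_step(state, experts, p=0.9, size=size)
print("highest version reached:", max(state.values()))
```

At a high transmission rate the idea sweeps the grid, picks up each upgrade as it reaches the relevant expert, and ends at version 4; at subcritical rates it never gets past Expert 1's quadrant.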

Here's what it looks like in action:


This is a ridiculously simplified model of how knowledge actually grows. It leaves out a great many important details (see caveats above). Nevertheless, I think it captures an important essence of the process. And so we can, tentatively, use what we've learned so far (about diffusion) to reason about knowledge growth.

In particular, the diffusion model gives us intuition for how to speed things up: make it easier for expert nodes to share ideas. This might mean clearing out the dead nodes that get in the way of diffusion. Or it might mean putting all the experts in a city, where ideas percolate quickly. Or just get them in the same room together:

So... that's all I have to share with you about diffusion.

I have one last thought to share, however, and it's an important one. It's about the growth (and stagnation) of knowledge in scientific communities. This will be a departure in tone and content from everything above, but I hope you'll indulge me.

On scientific networks

The loop below, it seems to me, is among the most important positive feedback loops in the world (and has been for quite some time):

The upstroke of the loop (K ⟶ T) is reasonably straightforward: We use new knowledge to devise new tools. For example, understanding the physics of semiconductors enables us to build computers.

The downstroke, however, warrants some unpacking. How does technological growth lead to knowledge growth?

One way — perhaps the most direct — is when new technology gives us new ways to perceive the world. For example, better microscopes allow us to peer more deeply inside the cell, generating insight into molecular biology. GPS trackers show us where animals are moving. Sonar allows us to explore the oceans. And so on.

This mechanism is vital, no doubt, but there are at least two other paths from technology to knowledge. They may be less straightforward, but I think they're at least as important:

One. Technology leads to economic surplus (i.e., wealth), and more surplus, in turn, allows more people to specialize in knowledge production.

If 90 percent of your country is engaged in subsistence agriculture, and most of the remaining 10 percent are performing some form of commerce (or war), it doesn't leave a lot of people with the free time to ponder the laws of nature. Perhaps this is why most science in premodern times was done by the children of wealthy families.

Today, the US produces over 50,000 PhDs every year. Instead of getting a job at age 18 (or earlier), a PhD student must be subsidized well into their 20s and perhaps into their 30s — and even then, it's unclear that they'll produce anything of real economic value. But this is what's necessary to get people to the frontier of knowledge, especially in difficult domains like physics or biology.

Point is, from a systems perspective, specialists don't come cheap. And the ultimate source of the societal wealth which funds these specialists is new technology; the plow subsidizes the pen.

Two. New technologies, especially in travel and communication, change the structure of the social networks on which knowledge grows. In particular, it allows experts and specialists to network more tightly with one another.

Notable inventions here include the printing press, steamships and railroads (making it easier to travel and/or mail things over long distances), telephones, airplanes, and the internet. All of these technologies serve to increase network density, especially within specialist communities (which is where the vast majority of knowledge growth occurs). For example, the correspondence networks that arose among European scholars during the late Middle Ages, or the way modern physicists use arXiv.

Ultimately both of these pathways are similar. Both lead to a greater network density of specialists, which in turn leads to knowledge growth:

For years I've been fairly dismissive of academia. A short stint as a PhD student left a bad taste in my mouth. But now, when I step back and think about it (and abstract away all my personal issues), I have to conclude that academia is still extremely important.

Academic social networks (e.g., scientific research communities) are some of the most refined and valuable structures our civilization has produced. Nowhere have we amassed a greater concentration of specialists focused full-time on knowledge production. Nowhere have people developed a greater ability to understand and critique each other's ideas. This is the beating heart of progress. It's in these networks that the fire of the Enlightenment burns hottest.

But we can't take progress for granted. If the reproducibility crisis has taught us anything, it's that science can have systemic problems. And one way to look at those problems is as network degradation.

Suppose we distinguish two ways of practicing science: Real Science vs. careerist science. Real Science is whatever habits and practices reliably produce knowledge. It's motivated by curiosity and characterized by honesty. (Feynman: 'I just have to understand the world, you see.') Careerist science, in contrast, is motivated by professional ambition, and characterized by playing politics and taking scientific shortcuts. It may look and act like science, but it doesn't produce reliable knowledge.

(Yes this is an exaggerated dichotomy. It's a thought exercise. Bear with me.)

Point is, when careerists take up space in a Real Science research community, they gum up the works. They angle to promote themselves while the rest of the community is trying to learn and share what's true. Instead of striving for clarity, they complicate and obfuscate in order to sound more impressive. They engage in (what Harry Frankfurt might call) scientific bullshit. And consequently, we might model them as dead nodes, immune to the good-faith information exchanges necessary for the growth of knowledge:

Perhaps a better model is one in which careerist nodes aren't just impervious to knowledge, but are actively spreading fake knowledge. Fake knowledge might include minor results that get hyped up and oversold, for example, or genuinely false results that arise from p-hacking or fabricated data.

But regardless of how we model them, careerists certainly have the potential to stifle our scientific communities.

It's like a nuclear reaction that we badly need — an explosion of knowledge — except that our enriched U-235 is salted with too much U-238, the nonreactive isotope that suppresses the chain reaction.

Of course, there's no categorical distinction between careerists and Real Scientists. We all have a little careerism in us. The question is just how much the network can carry before going quiet.


Oh hi, you made it all the way to the end. Thanks for reading.

A quick request

If you're on Twitter and have a few minutes, I'd really appreciate some feedback on this post. I'm excited about this medium (prose + interactive widgets) and plan on doing more posts like this in the future. So I'd love to know what worked and what didn't. Please get in touch!


CC0 — no rights reserved. You're free to use this work however you see fit :).


  • Kevin Kwok and Nicky Case for their thoughtful comments and suggestions on various drafts.
  • Nick Barr for moral support throughout the process, and for some of the most helpful feedback I've ever been given on my work.
  • Keith A. for pointing me to percolation theory, a field that 'wouldn't know a proof if it bit them in the face.'
  • Jeff Lonsdale for the link to this essay, which (despite its many flaws) was my main impetus to work on this post.


Originally published May 13, 2019.

All Comments:

soVeryTired(10000) 6 days ago [-]

This is a really interesting topic and the article is nicely written.

What bothers me, though, is the effort to link the mathematics to 'real-world' applications. I agree that forest fires, disease outbreaks, and the spread of ideas might be good candidates for this sort of modelling. But I think you'd need an awful lot of solid evidence to back that up.

crazygringo(3750) 6 days ago [-]

Without the real-world applications a lot of this could seem dry and boring or irrelevant.

Many years ago I went through teacher training, and one of the biggest things you learn is to always make material relevant to students by linking abstract concepts to real-world applications they actually care about.

It is true that in writing an article like this, you need to be very careful with your wording to distinguish between things that 'appear like', 'are similar to', 'suggest', or even 'is a first-order approximation of', versus stating that this is the model of epidemiology, forest fires, etc. (which needs citations, etc.). But in this particular article, the examples seem fairly straightforward as first-order approximations -- curious what sentences you're specifically objecting to?

r34(4105) 6 days ago [-]

Can anyone recommend a JS toolset for creating an essay like this? I can see React components used, but maybe there is something less complex?

gnomewascool(4103) 6 days ago [-]

I haven't used it, but there's idyll[0], which was used, among other things, to make an excellent interactive article about the JPEG format[1].

[0] https://idyll-lang.org/

[1] https://parametric.press/issue-01/unraveling-the-jpeg/

swah(537) 6 days ago [-]

So a few anti-vaxxers aren't really that damaging and should be left alone...

MaxBarraclough(10000) 4 days ago [-]

You're trolling or joking, I presume, but on the off chance you aren't, do please explain your reasoning.

jsilence(4124) 6 days ago [-]

No, they should be burned with wildfire!

emilfihlman(4107) 6 days ago [-]

Why is the percolation threshold lower than 25% for a square lattice?

croddin(4129) 6 days ago [-]

Each node has a link to itself and can re-infect itself on the next iteration, so a node has 5 different nodes it can infect in the next iteration rather than 4.

lifeisstillgood(1667) 6 days ago [-]

'Instant classic' is exactly how I felt - I 'knew' all the parts of the article but it has a new context and direction that gives a moment of clarity.

Cannot recommend this enough.

And on a slightly related note, a recent Talking Politics podcast had Sir David King, who was the UK's Chief Scientific Adviser to Blair. Early on there was a terrible Foot and Mouth outbreak, and Blair was at a loss how to prevent the outbreak spreading from farm to farm. But King understood SIR / SIS networks (and knew the experts who wrote the books) and said 'give me carte blanche and we will fix this - by day X we will see a tipping point'. And the Army shut down every farm, and on day X the infections stopped, and he had sufficient political capital to push hard on things like the Paris climate treaty (which the UK had a lot of impact on).

In other words, understanding the ideas in this article led to a global first on CO2 reduction.

Science Works Bitches

joshdance(1843) 6 days ago [-]

link to that podcast?

perlgeek(2572) 6 days ago [-]

Very cool!

As a small translation aid, when physicists talk about criticality, they tend to talk about 'dimensions' instead of 'degree'.

In a one-dimensional system (a line) you can have at most two nearest neighbors; in a two-dimensional system, 4; in 3D, 6; and so on.

Physicists have no trouble talking about fractional dimensions either, which can be realized in surfaces, fractal-like substances and so on.

Dimensions higher than 3 are achieved when interactions between non-nearest neighbors are relevant.

atemerev(3243) 6 days ago [-]

While these examples use lattices for simplicity, the terminology comes from network science and graph theory, where there are degrees and connections, and 'dimensions' mean something else (multi-dimensional networks is a rich theory on its own).

hirundo(4051) 6 days ago [-]

This essay feels like an instant classic. It's a very thoughtful blend of prose and code, in service of teaching some wildly relevant lessons about networks.

One eye opener is the extent that the 'Degree' parameter reduces the critical threshold.

> The degree of a node is the number of neighbors it has. Up to this point, we've been looking at networks of degree 4. But what happens when we vary this parameter?

And here's the power of this interactive essay, you can try it yourself. It's a toy model but makes a visceral argument. It adds up to the kind of media we dreamed of for the world wide web.

The degree parameter has exploded via social networks, greatly lowering the critical threshold of idea transmission. Our cultural DNA is being revised at a supercritical pace. This piece helps make a little sense of it in a way that static words couldn't.

paob(10000) 6 days ago [-]

Regarding the importance of the degree distribution, we have pretty solid theoretical results on how it's actually the spectrum of the adjacency matrix which acts as a global metric for how well epidemics spread. I always like to recommend Ganesh et al.'s article [1] for how we came to understand this phenomenon but also Prakash et al.'s paper [2] for their theorem which holds for a very large class of epidemic models.

What's pretty interesting is that the largest eigenvalue of the adjacency matrix of an undirected graph lies between the average degree and the maximum degree so the gut feeling you get when playing with the degree of a graph is legitimate.

I will jump on the opportunity to shamelessly self-plug our most recent work [3] on how to modify the topology of a network to have the epidemic go subcritical and quickly disappear.

The basic idea in our paper is to keep the maximum number of edges from the original graph under the constraint that the adjacency spectrum is bounded. Since that's a NP-hard problem we go for an approximation algorithm.

In any case, Melting Asphalt's essays are really an example to follow! A gold standard for expository material!

[1] Ganesh et al. https://ieeexplore.ieee.org/abstract/document/1498374

[2] Prakash et al. https://link.springer.com/article/10.1007/s10115-012-0520-y

[3] Bazgan et al. https://link.springer.com/chapter/10.1007/978-3-030-04651-4_...

atemerev(3243) 6 days ago [-]

Real networks are not lattices; they have varied degree distributions. The famous result of Marián Boguña and Alessandro Vespignani shows, that for scale-free networks (which closely resemble the networks we see in the real world), the epidemic threshold could be arbitrarily low: https://arxiv.org/pdf/cond-mat/0208163.pdf.

I have also published some research along these lines: https://arxiv.org/pdf/1403.5815.pdf

throwawaymath(3357) 6 days ago [-]

Is the SIR model a Markov model? The article doesn't mention whether or not infection can be modeled as a Markov process, but based on the graphics it looks like it.

If I understand correctly the probabilistic infection rate is 'history-less'; in other words, the probability of infecting an adjacent neighbor in the current state is not determined by the state transitions of any previous iterations.

It looks like you could model this naively with a discrete time Markov chain using a 3x3 stochastic matrix and three states: healthy, infected and deceased. I would guess you could do the same thing for the SIS model using states susceptible and infected with a 2x2 stochastic matrix instead.

In either case, modeling the epidemic as a Markov process would let you estimate the probabilities of criticality using the limit of the stochastic matrix. In fact, I think the critical threshold (probability of the epidemic going critical) will be given by left multiplying the initial probability vector by the limit of the stochastic Markov matrix.

soVeryTired(10000) 6 days ago [-]

> It looks like you could model this naively with a discrete time Markov chain using a 3x3 stochastic matrix and three states: healthy, infected and deceased.

Diffusion is a Markov process, yes. But you'd need three states per cell (not sure if that's what you meant by three states).

galaxyLogic(3983) 6 days ago [-]

What does this say about a network such as Hacker News?

careerists(10000) 6 days ago [-]

That it's mostly filled with careerists, and because of that, less and less discussion will prove to be anything except bullshit.

alan-crowe(4098) 6 days ago [-]

You can use the concept of criticality to revise and revive the Sapir-Whorf hypothesis.

In its blunt form (language limits what we can say), Sapir-Whorf runs contrary to everyday experience. Sure, if one's native language contains le mot juste, it is easy to speak one's mind. But if not, the burden is not great. One must speak at greater length, using more words, and forming the intersections and unions of their meanings, to obtain the exact nuance intended. This is the routine craftsmanship of every wordsmith.

Early in the essay Kevin Simler poses a challenge: 'Here's an SIS network to play around with. Can you find its critical threshold?' What is most interesting is not the numerical value (let's just call it x). What is most interesting is that it is fairly sharply defined. If two ideas are both fairly hard to transmit, and hence both close to x, we could easily have a situation in which the burden imposed by a missing mot juste makes all the difference. One idea has a transmissibility just above x and becomes an established staple of the culture. The other idea has a transmissibility just below x, so it crops up from time to time but always dies out.

One looks around, admiring the cultural landscape. One idea is present, one absent. Why? Language! While it is wrong to claim that 'a person's native language determines how he or she thinks', we have to take account of network criticality.

The much weakened Whorf-style claim that 'a person's native language burdens their communications with trivial inconveniences' is plausible and unimportant at the individual level. But we may nevertheless find that 'a social network's native language determines which thoughts die out and which ones take over most of the network.'

Compare and contrast with Beware Trivial Inconveniences https://www.lesswrong.com/posts/reitXJgJXFzKpdKyd/beware-tri..., which claims that trivial inconveniences have real-world potency without needing the leverage provided by network effects.

lukifer(4132) 5 days ago [-]

Sapir-Whorf may run even deeper; Simler's book [0] argues persuasively that social signaling is the primary motivator not only of language, but of cognition itself. So perhaps Sapir-Whorf puts the cart before the horse: we actually think in networks, based on our predictions of what will be socially tractable, and the language/vocabulary is merely an evolving toolkit to that end (with occasional innovation of a viral metaphor/portmanteau/etc).

[0] http://elephantinthebrain.com/

TeMPOraL(3242) 5 days ago [-]

Honestly, the weak form of Sapir-Whorf absolutely does reflect my everyday experience; I figured it out way before reading about it, by pondering my 'inner dialogue'. I run it bilingually, switching between English and Polish at the sub-sentence level, always using the language that makes it easier to think a particular thought.

> But if not, the burden is not great. One must speak at greater length, using more words, and forming the intersections and unions of their meanings, to obtain the exact nuance that you intend. This is the routine craftsmanship of every wordsmith.

This is a nice way of putting it, but I question how 'easy' and 'routine' it is. People can do this, which is why the strong form of Sapir-Whorf sounds too strong, but it's not free. As the 'Trivial Inconveniences' article shows, that's enough for it to not be done, especially when alternatives like 'picking up a similar but not-quite-right word' or 'not thinking the thought at all' exist.

I feel this could be especially impactful on imagination (the problem-solving kind), which can be viewed as a randomized reverse-lookup[0]. The brain suggests things to you connected to what you're thinking about, and, at least in my experience, they usually come up as words or phrases. If you don't have a word for a concept, you may not think of that concept, or of concepts related to it. Not that you couldn't think of it; just that you usually, and initially, won't.

One could think of language as a cache of those 'intersections and unions of meanings' that have proven themselves useful. Viewed like this it's an optimization trick, but everything we do and think is time- and energy-constrained, so such optimizations can make the difference (especially at the population level) in how precisely you think a thought before you accept it as 'good enough'.


[0] - Meta: the way I figured out this idea actually involved the brain suggesting the word 'reverse-lookup' to me, and me going on from there. My native Polish doesn't have a word for 'lookup', let alone 'reverse lookup', so I wonder what I would have come up with if I didn't know English.

unphased(10000) 6 days ago [-]

mot (not mos)

nicklaf(10000) 6 days ago [-]

I am struck in particular by the 'expert --> idea' simulation. It suggests that an effective strategy for beating the competition to the punch in making a breakthrough discovery is to concentrate a diverse collection of expertise (it also explains why it pays to be very social in your career as a researcher).

As mentioned in the article, putting specialists together in the same room is one way to accomplish this, but I can imagine the same happening in the mind of a single polymath, who, though perhaps being mediocre in several subjects, connects enough dots to beat the competition to combining them in a novel way. It might also make sense to recruit a few such polymaths/generalists to be put in your room of distinct experts, since they might serve well as a sort of 'interconnect bus' between them.

rhizome(4094) 6 days ago [-]

I think the conventional term for that strategy is 'the creative process.'

ranger207(10000) 6 days ago [-]

I vaguely remember a short story (I think by either Asimov or Clarke) where there were specialists whose entire job was to read and write in a couple of different fields, like, say, particle physics and farming. They didn't perform any experiments, but instead kept up with current research in their fields and searched for general patterns that united them. I've never been able to find the story again though...

noir_lord(4039) 6 days ago [-]

> I am struck in particular by the 'expert --> idea' simulation. It suggests that an effective strategy for beating the competition to the punch in making a breakthrough discovery is to concentrate a diverse collection of expertise (it also explains why it pays to be very social in your career as a researcher).

That's what think tanks and research divisions used to be: you'd put a bunch of smart people from different disciplines together, give them some money and a vague direction, and stand back.

Building 20 at MIT is another example I can think of, or the Colossus project during WWII. Tommy Flowers and Alan Turing were from radically different backgrounds: Turing was a pure theoretical genius, while Flowers was an apprentice mechanical engineer who put himself through night school to learn electrical engineering and then worked for the Post Office. Together they (and others) built the Colossus Mk 1.

Historical Discussions: PHP in 2019 (May 15, 2019: 901 points)
PHP in 2019 (May 13, 2019: 5 points)

(902) PHP in 2019

902 points 5 days ago by brendt_gd in 3239th position

stitcher.io | Estimated reading time – 10 minutes | comments | anchor

« back — written by Brent on May 10, 2019

PHP in 2019

Do you remember the popular 'PHP: a fractal of bad design' blog post? The first time I read it, I was working in a crappy place with lots of legacy PHP projects. This article got me wondering whether I should just quit and go do something entirely different than programming.

Luckily for me I was able to switch jobs shortly thereafter and, more importantly, PHP managed to evolve quite a bit since the 5.* days. Today I'm addressing the people who are either not programming in PHP anymore, or are stuck in legacy projects.

Spoiler: some things still suck today, just like almost every programming language has its quirks. Many core functions still have their inconsistent method signatures, there are still confusing configuration settings, there are still many developers out there writing crappy code — because they have to, or because they don't know better.

Today I want to look at the bright side: let's focus on the things that have changed and ways to write clean and maintainable PHP code. I want to ask you to set aside any prejudice for just a few minutes.

Afterwards you're free to think exactly the same about PHP as you did before. Though chances are you will be surprised by some of the improvements made to PHP in the last few years.


  • PHP is actively developed with a new release each year
  • Performance since the PHP 5 era has doubled, if not tripled
  • There's an extremely active ecosystem of frameworks, packages and platforms
  • PHP has had lots of new features added to it over the past few years, and the language keeps evolving
  • Tooling like static analysers has matured over the past years, and only keeps growing

Update: people asked me to show some actual code. I'm happy to say that's possible! Here's the source code of one of my hobby projects, written in PHP and Laravel; and here is a list of a few hundred OSS packages we maintain at our office. Both are good examples of what modern PHP projects look like.

Let's start.

# History summarized

For good measure, let's quickly review PHP's release cycle today. We're at PHP 7.3 now, with 7.4 expected at the end of 2019. PHP 8.0 will be the next version after 7.4.

Ever since the late 5.* era, the core team has tried to keep a yearly release cycle, and it has succeeded in doing so for the past four years.

In general, every new release is actively supported for two years, and gets one more year of 'security fixes only'. The goal is to motivate PHP developers to stay up-to-date as much as possible: small upgrades every year are much easier than making the jump from 5.4 to 7.0, for example.

An active overview of PHP's timeline can be found here.

Lastly, PHP 5.6 was the latest 5.* release, with 7.0 being the next one. If you want to know what happened to PHP 6, you can listen to the PHP Roundtable podcast.

With that out of the way, let's debunk some common misconceptions about modern PHP.

# PHP's performance

Back in the 5.* days, PHP's performance was... average at best. With 7.0 though, big pieces of PHP's core were rewritten from the ground up, resulting in a doubling or tripling of performance.

Words don't suffice though. Let's look at benchmarks. Luckily other people have spent lots of time benchmarking PHP performance. I find that Kinsta keeps a good, up-to-date list.

Ever since the 7.0 upgrade, performance has only increased. So much so that PHP web applications now have comparable, and in some cases better, performance than web frameworks in other languages. Take a look at this extensive benchmark suite.

Sure, PHP frameworks won't outperform C and Rust, but they do quite a lot better than Rails or Django, and are comparable to ExpressJS.

# Frameworks and ecosystem

Speaking of frameworks: PHP isn't just WordPress anymore. Let me tell you something as a professional PHP developer: WordPress isn't in any way representative of the contemporary ecosystem.

In general there are two major web application frameworks, and a few smaller ones: Symfony and Laravel. Sure there's also Zend, Yii, Cake, Code Igniter etc. — but if you want to know what modern PHP development looks like, you're good with one of these two.

Both frameworks have a large ecosystem of packages and products, ranging from admin panels and CRMs to standalone packages, from CI tools to profilers, plus numerous services like WebSocket servers, queue managers and payment integrations; honestly, there's too much to list.

These frameworks are meant for actual development though. If you're in need of pure content management, platforms like WordPress and CraftCMS are only getting better.

One way to measure the current state of PHP's ecosystem is to look at Packagist, the main package repository for PHP. It has seen exponential growth. With ±25 million downloads a day, it's fair to say that the PHP ecosystem isn't the small underdog it used to be.

Take a look at this graph, listing the number of packages and versions over time. It can also be found on the Packagist website.

Besides application frameworks and CMSs, we've also seen the rise of asynchronous frameworks over the past few years.

These are frameworks and servers, written in PHP or other languages, that allow users to run truly asynchronous PHP. A few examples include Swoole, Amp and ReactPHP.

Since we've ventured into the async world, stuff like web sockets and applications with lots of IO have become actually relevant in the PHP world.

There has also been talk on the internals mailing list — the place where core developers discuss the development of the language — to add libuv to the core. For those unaware of libuv: it's the same library Node.js uses to allow all its asynchronicity.

# The language itself

While async and await are not available yet, lots of improvements to the language itself have been made over the past years. Here's a non-exhaustive list of new features in PHP:
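For a flavour of what some of these additions look like, here is a hedged sketch of a few 7.0-era features (the functions and values are made up for illustration):

```php
<?php
declare(strict_types=1);

// Scalar parameter and return type declarations (PHP 7.0)
function add(int $a, int $b): int {
    return $a + $b;
}

// Null coalescing operator (7.0): default when a key is missing
$config = [];
$host = $config['host'] ?? 'localhost';

// Spaceship comparison operator (7.0): three-way compare in one token
$nums = [3, 1, 2];
usort($nums, function ($a, $b) { return $a <=> $b; });

echo add(2, 3), ' ', $host, ' ', implode(',', $nums), "\n";
```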

While we're on the topic of language features, let's also talk about the process of how the language is developed today. There's an active core team of volunteers who move the language forward, though the community is allowed to propose RFCs.

Next, these RFCs are discussed on the 'internals' mailing list, which can also be read online. Before a new language feature is added, there must be a vote. Only RFCs with at least a 2/3 majority are allowed into the core.

There are probably around 100 people allowed to vote, though you're not required to vote on each RFC. Members of the core team are of course allowed to vote, since they have to maintain the code base. Besides them, there's a group of people who have been individually picked from the PHP community. These people include maintainers of the PHP docs, contributors to the PHP project as a whole, and prominent developers in the PHP community.

While most core development is done on a voluntary basis, one of the core PHP developers, Nikita Popov, has recently been employed by JetBrains to work on the language full time. Another example is the Linux Foundation, which recently decided to invest in Zend Framework. Employments and acquisitions like these ensure stability for the future development of PHP.

Besides the core itself, we've seen an increase in tooling around it over the past few years. What comes to mind are static analysers like Psalm (created by Vimeo), Phan and PHPStan.

These tools will statically analyse your PHP code and report any type errors, possible bugs etc. In some way, the functionality they provide can be compared to TypeScript, though for now the language isn't transpiled, so no custom syntax is allowed.

Even though that means we need to rely on docblocks, Rasmus Lerdorf, the original creator of PHP, did mention the idea of adding a static analysis engine to the core. While there would be lots of potential, it is a huge undertaking.
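As an illustration of the docblock-based checking mentioned above, this is the kind of code tools like Psalm or PHPStan can verify (the function and values are a made-up example, not from the article):

```php
<?php
/**
 * @param int[] $prices prices in cents
 * @return int
 */
function total(array $prices): int {
    return array_sum($prices);
}

// The runtime only checks `array`; a static analyser also reads the
// docblock and would flag a call like total(['1.99', '2.99']) as
// passing string[] where int[] is expected.
echo total([199, 299]), "\n";
```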

Speaking of transpiling, and inspired by the JavaScript community, there have been efforts to extend PHP's syntax in userland. A project called Pre does exactly that: allow new PHP syntax which is transpiled to normal PHP code.

While the idea has proven itself in the JavaScript world, it could only work in PHP if proper IDE and static analysis support were provided. It's a very interesting idea, but it has to grow before it can be called 'mainstream'.

# In closing

All that being said, feel free to still think of PHP as a crappy language. While the language definitely has its drawbacks and 20 years of legacy to carry with it, I can say with confidence that I enjoy working with it.

In my experience, I'm able to create reliable, maintainable and quality software. The clients I work for are happy with the end result, as am I.

While it's still possible to do lots of messed up things with PHP, I'd say it's a great choice for web development if used wisely and correctly.

Don't you agree? Let me know why! You can reach me via Twitter or e-mail.

All Comments: [-] | anchor

derefr(3593) 5 days ago [-]

I always wondered, in a world where

1. people seem to hate writing (old?) PHP, and

2. there are a good dozen languages that were created solely because people hate this other language Javascript, which both compile to Javascript and have the semantics of Javascript, just not the syntax or stdlib of Javascript;

...why we didn't end up with languages compiling to PHP, targeting the Zend VM, or whatever you'd have to do to get the PHP module in Apache to interpret non-PHP code.

dajonker(10000) 5 days ago [-]

Because Javascript is the only thing that runs on the client side.

alexandernst(4075) 5 days ago [-]

Does it have an actual debugger already? Or are we still stuck with xdebug?

LeonM(4034) 5 days ago [-]

What's wrong with xdebug?

Sure, xdebug requires a graphical frontend to be practical (I use phpstorm), but so does GDB.

Debugging with xdebug has been a really good experience for me. Much better than python or even GDB for C code.

thibran(3989) 5 days ago [-]

I would argue that Kotlin is the better modern PHP. Aside from a lot of magic, Laravel makes PHP okayish, but it is still the worst mainstream programming language.

jbrooksuk(3429) 5 days ago [-]

> but it is still the worst mainstream programming language.

Please back this up.

guggle(10000) 5 days ago [-]

Do we still have to use the mb_* functions to deal with UTF-8? That was a major PITA when I used to write PHP.
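For readers who haven't hit this: the byte-oriented string functions still exist alongside their multibyte-aware mb_* counterparts. A small illustration (assumes ext-mbstring is loaded):

```php
<?php
$s = 'héllo'; // the 'é' is two bytes in UTF-8

echo strlen($s), "\n";              // counts bytes: 6
echo mb_strlen($s, 'UTF-8'), "\n";  // counts characters: 5

echo strtoupper($s), "\n";              // byte-wise: leaves the 'é' as-is
echo mb_strtoupper($s, 'UTF-8'), "\n";  // encoding-aware: HÉLLO
```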

malinens(10000) 5 days ago [-]


coblers(10000) 5 days ago [-]

It might not be, but how does it fare against similar languages?

I have no idea why you'd choose it over Go/Node.js/Ruby or similar for your standard webdev stuff.

EugeneOZ(3441) 5 days ago [-]

All the apps I've seen written in Ruby were awfully slow. I don't know Ruby, maybe it's just me, but it would be interesting to see examples of Ruby apps that aren't slow.

ilmiont(4034) 5 days ago [-]

Writing with PHP is better than ever... and getting better and better.

And... you don't need to worry too much about the server, or the request lifecycle, or networking... you just write your app, in a language which is, in my opinion, going in the right direction with a stronger slant towards OOP and types.

Of course it's still entirely possible to write garbage PHP code... but it's possible to write garbage in anything.

I'm increasingly proud to say I'm a PHP developer; it still gets a lot of bad commentary but it all tends to be based on historical stigmas which are increasingly untrue. Yet all the original benefits are still here, and the language itself is going from strength to strength...

neop1x(10000) 4 days ago [-]

Even the nature of server-side PHP is bad. Running the script like an executable, with opcache in the better case, is no good. Also it is not uncommon to upload a PHP shell through some crappy forgotten upload.php script... Come on, it's the Personal Home Page language; you shouldn't do anything bigger than your home page in it.

momokoko(10000) 5 days ago [-]

My issue with modern PHP is that it's essentially becoming Java. And with the JVM and the Java ecosystem, what is the compelling reason to not just pick Java at this point? With Java you are basically writing exactly what you would be writing with PHP, except with more language features and the ability to opt into other languages on the JVM like Kotlin and Scala.

The modern additions are fantastic for projects and teams that have a specific need to stay on PHP. I still work on projects with PHP and am thankful it has so many improvements. But in its current form and direction, I'm not sure how it has not become simply a little bit worse version of Java.

coblers(10000) 5 days ago [-]

Maybe it isn't, but other languages/frameworks have improved the past ten years as well. I don't see the selling point in building something in PHP in 2019 when you have similar if not better/more mature languages/frameworks at your disposal.

Won't your prototyping be faster in Go/Python/Node/Ruby anyway, with a more stable surface to build upon? I really fail to see where PHP has its place in 2019. For 'build a minimum viable product fast', the above win. For enterprise stuff, good old Java/C# win.

joelbluminator(3984) 5 days ago [-]

Very good documentation, stability and wide usage are the strong points of PHP.

jbrooksuk(3429) 5 days ago [-]

> Won't your prototyping be faster in Go/Python/Node/Ruby anyway, with a more stable surface to build upon it?

One of the most understated pros of PHP (IMHO) is that it's so easy to get set up with. You can start hacking on something so quickly. In my experience, Go has not been like that. Node.js also was never as quick.
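A sketch of that low-friction setup: a single hypothetical index.php served by PHP's built-in development server (`php -S localhost:8000`), with no build toolchain involved:

```php
<?php
// Hypothetical index.php: run `php -S localhost:8000` in this directory
// and the built-in dev server routes requests straight to this script.
function greet(array $query): string {
    $name = $query['name'] ?? 'world';
    return 'Hello, ' . htmlspecialchars($name) . '!';
}

echo greet($_GET), "\n";
```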

EugeneOZ(3441) 5 days ago [-]

It only means you don't know PHP enough. No offense, I moved from PHP to Rust after 11 years of using it, but it was definitely not because of tools or speed of development. I just prefer strict typing and Rust's errors handling.

If somebody prefers dynamic languages and wants to create a prototype, it can be done even without any framework, quickly enough, with millions of libraries for any need.

aczerepinski(10000) 5 days ago [-]

That's how I feel. I'm glad to hear it's improved, but 'not as bad as it used to be' isn't a compelling argument that would make me consider PHP over Elixir/Go.

megous(4089) 5 days ago [-]

Stable? A big e-shop I wrote in PHP in 2005 has worked without any compatibility issues to this day. No need to recompile anything, fix bugs in the runtime, etc.

Ruby was an obscure language just gaining acceptance back then, Node didn't exist, Go didn't exist, and C# had just gained iterators.

PHP showed over the years that it was a stable and painless surface to build on. And I don't see it changing.

mgkimsal(3936) 5 days ago [-]

Referencing 'go' and somehow smuggling in 'mature' in there without mentioning the dependency management situation is ... disheartening. There may be many reasons to like the go language, but the dep management issue doesn't seem solved at all (last viewed in Nov 2018 - maybe it's all magically better and there's 80% community consensus and everything is smooth now?)

larzang(10000) 5 days ago [-]

As a primarily-PHP developer, I've seen PHP improve by leaps and bounds in the ten years I've been working with it. However, there's still a major fundamental issue that has seen improvement but remains unresolved: function call overhead.

Contrived example:

- https://3v4l.org/eEtFl

- https://3v4l.org/8QMFh

Two identical ways to do the same thing, one with nicer FP-like syntax, but because of function overhead even on 7.3.x it's significantly slower.

If you're building large-scale PHP applications you have to stay away from a bunch of shiny new features in anything remotely performance-sensitive, and it causes you to write worse code in general (e.g. this monolith would be a lot cleaner as multiple sub-functions, but it's going to be called in a loop 5 million times so I have to keep it ugly).

With compiled languages you can write clean code and let the compiler optimize it. With PHP, you have to make development-time sacrifices in legibility and maintainability in order to avoid runtime sacrifices in performance, which is bad for a developer at any stage and a huge footgun for a junior.
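A rough way to see the overhead the parent describes: timing the same transform via array_map with a closure versus a plain foreach. Absolute numbers depend entirely on the PHP build and version, so treat this as illustrative only:

```php
<?php
$data = range(1, 100000);

$t0 = microtime(true);
$viaMap = array_map(function ($x) { return $x * 2; }, $data);
$mapSeconds = microtime(true) - $t0;

$t0 = microtime(true);
$viaLoop = [];
foreach ($data as $x) {
    $viaLoop[] = $x * 2;
}
$loopSeconds = microtime(true) - $t0;

// Same result either way; the difference is one closure call per element.
printf("array_map: %.4fs  foreach: %.4fs\n", $mapSeconds, $loopSeconds);
```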

efficax(10000) 5 days ago [-]

Do you really find function overhead to have that great an impact on performance? I work on a PHP app that sees around 15k rps and I use a lot of functional patterns (it feels best to me; my brain was formed in a lispy way).

By far, database time remains our biggest bottleneck, especially since we finally made it onto PHP 7.

ashton314(4098) 5 days ago [-]

I worked at a small PHP shop for about two years. There was a ton of legacy code that was utterly terrifying: a 13-deep nested `if` and the like, global variables, etc.

I convinced my boss to let me start using Laravel. It made development worlds saner and orders of magnitude easier. Laravel does a lot for you, and they've thought about how to solve some tricky problems in clever ways.

That being said, I would never recommend PHP as the language to solve any given problem. It is now less of a terrible language and more of simply a mediocre language. I can't think of anything it does particularly better than another language. For any given problem, there is most certainly a better language to solve it, be it Ruby, Elixir, Clojure, Rust, etc.

I've been working with Elixir and Phoenix for web development for a few months now, and it is sooooo much better. It's hard for me to enumerate all the things, small and large, that reduce developer friction in contrast to even PHP's best framework.

IloveHN84(10000) 5 days ago [-]

You forgot one thing: PHP has a far larger community and more packages than exotic languages like Elixir. Everything is documented and there are tons of resources out there to solve a problem. It's like C++ for the web. Why does C++ still live? Community, extensive testing, and adoption by large organizations. PHP is the same. As long as big companies out there use PHP (consider Vimeo or Flixbus or Groupon as examples) because it is performant, it will fail to die.

xrd(3732) 5 days ago [-]

I think the way to analyze a language and framework should be: 'what does this framework permit the worst developer on the team to do?'

I think the story for Php in this regard has not been a good one historically.

You can put processes like PR review in place, and add coding standards, but getting things done will always trump those, especially when you are bootstrapping.

If you can tell me modern PHP is better than the alternatives at preventing bad behavior (or encouraging good behavior) then I'm interested. But, this post didn't move me to change my mind there.

xrd(3732) 5 days ago [-]

For example, coding standards:

`There MUST NOT be a hard limit on line length; the soft limit MUST be 120 characters; lines SHOULD be 80 characters or less.`


A language that has to permit this kind of thing (or thinks it has to) is just going to lead to developer infighting. This is an important issue because GitHub PRs are going to be unreadable if one developer can insist that 'this line needs to be 145 characters...' and then no-one can diff changes to that code. You can break the PR process with this kind of thing.

tannhaeuser(2996) 5 days ago [-]

Rather than chasing MVw or OO trends of the JavaScript ecosystem and become more like JavaScript or Java, so to say, why don't the PHP developers identify and build on the unique strengths of PHP? Which are IMO: the large installed base in classic web hosting, and the original purpose of PHP as a high-level scripting language embedded in otherwise static HTML as in '<?php ...>'. There is a huge room for improvement there since PHP's quick-and-dirty hackjob of a page-embedded language isn't HTML-aware and thus prone to injection attacks, which has caused uncountable attacks, botnets, and other problems for PHP and non-PHP sites alike, in particular when coupled with PHP's target demographic. This is especially painful because PHP embeds via SGML mechanisms (processing instructions), when SGML has all the features needed for context-aware, injection-free templating. If PHP can improve that story, and stop becoming the language of unintended consequences and usage-outside-its-comfort-zone, then it might earn a lot more respect and consideration.
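A minimal sketch of the injection problem described above: PHP's embedding is not HTML-aware, so escaping has to be done by hand at the output boundary (the variable names here are made up):

```php
<?php
$userInput = '<script>alert(1)</script>';

// Naive interpolation: the payload reaches the page as live markup.
$unsafe = "<p>Hello, $userInput</p>";

// The manual, context-unaware fix: escape when emitting HTML.
$safe = '<p>Hello, ' . htmlspecialchars($userInput, ENT_QUOTES, 'UTF-8') . '</p>';

echo $safe, "\n";
```

Context-aware templating (as the parent argues SGML permits) would make the second form the default rather than something the developer must remember.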

simion314(4125) 5 days ago [-]

Your claims are vague, do you have a specific example of a new feature and a new mainstream framework that has the security issues you mention?

zanny(10000) 5 days ago [-]

If you want integrated scripting with web pages PHP cannot compete with the language already being embedded in pages to run on the client. The rise of Node et al was in large part the fall of PHP.

bayesian_horse(10000) 5 days ago [-]

I actually see no reason to use PHP over other languages except if the project already has a large codebase in PHP.

In all other cases I keep thinking: 'Have you seen Django yet?' And probably you can substitute ROR, Node, ASP.NET Core etc

Lazare(3036) 5 days ago [-]

> why don't the PHP developers identify and build on the unique strengths of PHP? Which are IMO: [...] the original purpose of PHP as a high-level scripting language embedded in otherwise static HTML as in '<?php ...>'.

I really don't think that's a strength of PHP. It's very bad at that, and that's not been a focus of the language for decades. PHP, even with the work you suggest, is always going to be inferior compared to something like Liquid, Jinja, or Twig.

It's arguably true that PHP should focus on improving the things that make people use it, but nobody is using it because of that. Even the ability to run PHP on shared hosting is increasingly irrelevant as we move past that being a useful feature.

DJBunnies(10000) 5 days ago [-]

I'm hiring PHP developers in Boston!

Doctrine, symfony, AWS, all backend API work.

esistgut(3702) 5 days ago [-]

If you are doing API with Symfony you may want to check API Platform: https://api-platform.com/

joemaller1(10000) 5 days ago [-]

I've been using PHP semi-regularly since the late 90s. It's been empowering and infuriating.

The ecosystem is radically better than it used to be. Composer and the Packagist registry are as mature and dependable as npm, PyPI or RubyGems. (despite hours lost to my own namespace screwups). I'm also happy to see the Prettier-PHP project automating and enforcing code-style standards.

For whatever reason, I often feel clumsier after working on a PHP project. After working in other languages like JS or Python, I tend to feel like I've leveled-up my skills.

One thing I wish PHP would address is the inconsistency in its map-filter-reduce functions -- their argument-order doesn't match (array, callback) vs. (callback, array):

    array_filter($arr, $fn)
    array_map($fn, $arr)
    array_reduce($arr, $fn)
The amount of cognitive overhead I've wasted on those is ridiculous.

- https://packagist.org/

- https://github.com/prettier/plugin-php

benp84(4099) 5 days ago [-]

I hear this complaint a lot but I find those argument orders completely intuitive.

You filter (1) an array with (2) a function. You map (1) a function over (2) an array. You reduce (1) an array with (2) a function.

It follows exactly what I'm thinking when I type it. To reverse the orders would be, what? 'Mapping an array with a function?' 'Filter a function on an array?'

momokoko(10000) 5 days ago [-]

From a performance perspective, you likely want to do those with `foreach` anyway in PHP. Especially when the array function is fed a closure. I know it is not as interesting as using those functions, but because of PHP internals, `foreach` is almost always preferred.


eithed(10000) 5 days ago [-]

Laravel has a Collection class that wraps arrays, so you can do `(new Collection([your array]))->filter(function(){})` or `(new Collection([your array]))->map(function(){})`. It also adds other helper methods. I guess Symfony has something similar.

Reference: https://laravel.com/docs/5.8/collections

taytus(1069) 5 days ago [-]

Infuriating? I'm not here to defend PHP, but even though I agree that the inconsistencies are not nice, nothing stops you from creating your own wrapper for these functions.

Infuriating? really?
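A sketch of that wrapper idea: normalising the builtin argument orders behind one consistent (array, callable) signature (the Arr class name is our own invention):

```php
<?php
// Hypothetical helper: one consistent (array, callable) argument order
// on top of array_map/array_filter/array_reduce.
final class Arr
{
    public static function map(array $items, callable $fn): array
    {
        return array_map($fn, $items);
    }

    public static function filter(array $items, callable $fn): array
    {
        // array_filter preserves keys; reindex for list-like results.
        return array_values(array_filter($items, $fn));
    }

    public static function reduce(array $items, callable $fn, $initial = null)
    {
        return array_reduce($items, $fn, $initial);
    }
}

$even = Arr::filter([1, 2, 3, 4], function ($n) { return $n % 2 === 0; });
$sum  = Arr::reduce([1, 2, 3], function ($carry, $n) { return $carry + $n; }, 0);
```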

owenwil(1155) 5 days ago [-]

I've actually been really impressed with Laravel after switching back to PHP for a few projects. Not only is the developer tooling experience some of the best I've experienced, it's just really the only framework I've ever experienced with a high quality ecosystem of tools—from Forge[1], which makes it dead-simple to deploy a Laravel app into production to things like Horizon, for managing Redis queues.

A great example of this in action is Laravel Spark[3], a first party base for building paid SaaS apps. I built and launched a writing tool, Write Together[2], to the world in under three weeks, payment systems and all, and got 150 paying customers in a matter of a few weeks. One hell of a great way to MVP an idea and build something useful, in a low amount of time.

I'm basically developing two Laravel apps full-time at the moment, and it's the most fun I've had in years...compared to the hellscape of NPM dependencies and other complexities I'm usually bogged down with. Composer, the package distribution system, really needs work and is incredibly slow, but other than that—I'm really happy.

[1] https://forge.laravel.com

[2] http://writetogether.space

[3] https://spark.laravel.com

SXX(3041) 5 days ago [-]

> Composer, the package distribution system, really needs work and is incredibly slow, but other than that—I'm really happy.

It also uses abysmal amounts of RAM for some reason. I managed to run headless Chrome for my app with only 512MB, but not Composer.

tabtab(4053) 4 days ago [-]

In my opinion the framework matters more than programming language. One typically doesn't need the 'odd' or 'fancy' features of a programming language if the framework is well designed and/or a good fit for the shop's conventions or needs.

userSumo(3949) 5 days ago [-]

No doubt it's great, but does anyone know how it compares to Django for SaaS apps?

agumonkey(944) 4 days ago [-]

Kudos to the Laravel team (and Symfony, since IIRC Laravel is built on top of Symfony components)

munk-a(4028) 4 days ago [-]

Laravel is interesting for greenfield stuff. I'm working on a legacy project, and we've had a bunch of good times with Zend as it allows you to transition to it more naturally, piece by piece.

nickjj(2665) 5 days ago [-]

There are comparable things with other frameworks too.

For example I wrote a course on building a SAAS app with Flask. It's available at: https://buildasaasappwithflask.com/

It covers everything about user registration, profiles, subscription billing, one-time billing, invoicing, and about 50 other things you would likely want in a SaaS app, or any application really.

The course comes with the source code, 15+ hours of video explaining every line of code in stages, lifetime free updates, and lifetime support, for close to half of what Spark charges just for the source code with a single-site license (with the Flask course you can use the code in however many projects you want).

Spark's business model seems interesting though. I don't use it personally but do you just get the source code and nothing else? How do they limit you to 1 site if you end up with a local copy of the scaffolding / code base?

cntainer(10000) 5 days ago [-]

Agreed, Laravel provides the best dev experience in the PHP world.

The funny thing is that Laravel basically took the Ruby on Rails philosophy and applied it to PHP. Just looking at their site it's clear they have a very similar vision: simple, fast and fun. One can always find differences but the basic principles are the same.

IMO this is great kudos to the Laravel guys; whatever language I use, I always look for tooling that follows the KISS principle. In PHP you have Laravel, in Java/Scala you have Play, etc.

I need to add that, because of the job, I did a lot of PHP, Ruby, Java, Javascript with many different frameworks and I can say that Rails is still the best for me when it comes to dev experience if you need to move fast. Laravel has come a long way but it's still not at the same level of maturity in terms of tooling and ecosystem. As a bonus with Rails you use Ruby which is designed for programmer productivity and fun.

Anyway, whatever language/framework you use just keep it simple, that's the most important decision you can make for the health of your project.

stephenr(3939) 5 days ago [-]

> Composer, the package distribution system

a package distribution system.

Composer suffers some of the same issues NPM does, IMO: it encourages stuff like 'install dependencies at deployment' and 'why write it when you can blindly trust someone else's code'.

Not everyone who uses PHP uses Composer.

joeldg(10000) 5 days ago [-]

In PHP shops I have worked in, we would just hand new interns the 'Object-Oriented Programming with Java' book and have them go through it, as everything in that book transfers directly to PHP (and they all knew Java).

Then there's Laravel, the best framework I have ever seen in any language. Comparing it to Rails is a huge disservice, as it is far superior by almost any measure.

Intermernet(4122) 5 days ago [-]

Last time I used Composer (about 3 years ago) it had a nasty memory leak that eventually meant I had to run the build on a higher-specced instance than the actual application. This was mainly due to the ridiculous dependencies of the application, but it was almost a showstopper. Has this improved?

throwayEngineer(10000) 5 days ago [-]

Laravel was fantastic. The first month, I followed the tutorial and was bitter about the complexity, but when it came time to build my back end, it took only a few weeks.

I would recommend.

no_wizard(4114) 5 days ago [-]

I hear you on composer being slow. One thing I found that works extremely well to speed up composer installs:


Give this package a try, it really worked for me to speed up package management with composer.

ljm(10000) 5 days ago [-]

I cut my teeth as a developer on PHP, back in the 4.3 days before they'd really placed their bets on OOP, and we were dealing with mysql_real_escape_string, needles and haystacks, and haystacks and needles. No PDO, XSS and SQL injections up the wazoo.

I can't say I enjoyed working on any of those projects compared to the kind of stuff I do now (and how much I've learned since), but I've been remarkably surprised by some more recent codebases, despite the architecture sometimes feeling a little over-engineered (Symfony 2, for an outdated example).

It's not the style of code I enjoy writing at all but I've seen some incredibly clean stuff that would put a lot of Rails apps to shame. And I'd take that PHP over a badly maintained Rails app any day.

I wouldn't use PHP by choice still, but I no longer care to be snobbish about it. People are doing some good stuff in it, just the same as has happened with JS.

tacone(4043) 5 days ago [-]

To speed up Composer, you can install Prestissimo [1] globally; it speeds things up considerably.

[1] https://github.com/hirak/prestissimo

hguhghuff(3844) 5 days ago [-]

The thing that matters is language consistency... a language with a pure, clean vision of itself, in which the programmer can guess at syntax because they understand the general principles to which the language adheres. Nothing here says PHP has been fixed in this regard.

Python made the big leap and fixed some huge problems when it went to Python 3. Yes, its migration approach was a total failure, but it further cleaned up what was an already clean and consistent language.

The only thing that would have really interested me in this post would have been to hear that PHP has been cleaned up into a consistent syntax, but that's not what this article says.

deanclatworthy(3931) 5 days ago [-]

I've been coding in PHP for almost 20 years now. I think I can count on one hand the number of times I have had an issue with inconsistent function signatures. It becomes second nature, and if I forget, the documentation is available online or, in the worst case, built into my IDE.

I can understand why people make this into such a pain point, but quite frankly it isn't for anyone who works day in day out with PHP.

furicane(10000) 4 days ago [-]

I often run into this argument, 'inconsistent syntax', followed by zero proof. Then I wonder: is the person making the comment even using PHP, and are they even a capable programmer?

Yes, PHP has been 'cleaned up', and you have always had the option to write clean, concise code without the language interfering with or hindering you in any way.

There's no programming language out there that makes up for the sloppiness and inability of the person behind the screen.

colshrapnel(10000) 5 days ago [-]

What you are talking about is just syntax, which is not an issue for a PHP dev nowadays. I type far more custom method names than vanilla PHP functions, and when I do need the latter, my IDE shows me the right signature after a few keystrokes.

So, it's apparently a good reason to hate PHP, but by no means a reason not to use it.

wvenable(4063) 5 days ago [-]

> The thing that matters is language consistency.

To be fair, we are talking about standard library consistency, not language consistency. PHP is a consistent language, but its standard library is very low-level. In Python, you don't call the MySQL C library functions directly; you use an object-oriented abstraction. In PHP, you can call those MySQL C library functions directly, or you can use an object-oriented abstraction.

jplayer01(10000) 5 days ago [-]

> Python made the big leap and fixed some huge problems when it went to python 3 - yes it's migration approach was a total fail, but it further cleaned up what was and already clean and consistent language.

People keep saying it was a huge failure, but honestly, I don't see how it was supposed to be done better otherwise. Either you make breaking changes that will impact your entire ecosystem or you don't and live with the same cruft from 20 years ago.

simion314(4125) 5 days ago [-]

That would break too many things, and the benefits would not be enough. They could add the new things and deprecate the old while keeping them all, but then you get a manual twice as big. The changes in PHP all feel pragmatic: better SQL support, better cryptography support, better defaults, better performance, with less emphasis on cool-looking syntax or the latest fashionable features.

IMO pragmatic is good, especially when you have an existing code base to upgrade to the latest supported version; you don't want a Python 3 migration story. (I migrated a medium-sized project to the latest version, and the only problem was a deprecated cryptographic function used to generate some random-looking strings. I replaced it with the newer, safer function and was done; my project is now compatible with both the old and the new version.)

nucleardog(10000) 4 days ago [-]

> The thing that matters is language consistency... [...] Python made the big leap and fixed some huge problems when it went to python 3

And yet, here we are over a decade along and the Python project is still maintaining Python 2 and I still need to maintain a copy of Python 2 on my computer because of the number of actively maintained projects still using it. How many developer-hours that could have gone into doing something else have instead gone into this 'language consistency' project?

At any point in the past 25 years someone could have forked PHP to clean the syntax up and accomplished the exact same thing -- and yet apparently nobody has seen the inconsistency as a big enough issue or time sink to do so. I would take that as prima facie evidence that this isn't nearly as important as you seem to think it is.

Meanwhile, one of the strengths of PHP in my opinion has been how carefully they have maintained and managed backward compatibility. While 'move fast and break things' might be the new norm, there is still a huge contingent of developers and businesses that see value in slower, more considered change.

LoSboccacc(4110) 5 days ago [-]

eh, my problem with PHP was never really performance (up to a point) or the language itself; it was the standard library making figuring out every operation an exercise in googling.

like, why the weird underscore difference between strtok and str_split? why does str_replace take the search term first and the string as the third parameter, while in strpos the source string comes first and the search term second?

it's all confusing and weird, and while a specialist is maybe at ease with this, for a generalist going in and out of the language as needed it's frustrating.
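The two flips being described can be put side by side in a minimal sketch (stdlib calls only, nothing hypothetical):

```php
<?php
// Needle/haystack order flips between these two stdlib calls:
$replaced = str_replace('world', 'PHP', 'hello world'); // search, replace, subject
$position = strpos('hello world', 'world');             // subject (haystack) first

// And the underscore convention flips too: strtok vs str_split.
$first  = strtok('a,b,c', ',');   // 'a'
$pieces = str_split('abcdef', 2); // ['ab', 'cd', 'ef']
```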

maxxxxx(4001) 5 days ago [-]

When I did more PHP, I quickly ended up writing my own sort of standard library with consistent functions and unit tests. It doesn't really take much time and makes things much more predictable.

ch_123(3373) 5 days ago [-]

> Do you remember the popular 'PHP: a fractal of bad design' blog post? The first time I read it, I was working in a crappy place with lots of legacy PHP projects. This article got me wondering whether I should just quit and go do something entirely different than programming.

Seems like a pretty extreme reaction - how about trying a job with something other than PHP?

brendt_gd(3239) 5 days ago [-]

You're absolutely right. I think it's fair to say that I was closing in on burnout, which made me think irrationally.

Like I said: I was lucky to be able to switch jobs and rediscover my passion for programming.

baybal2(2668) 5 days ago [-]

> PHP isn't the same old crappy language it was ten years ago

But it is too late. JS on the backend has almost completely eaten its mindshare in its target demographic.

I think JS itself risks ending up like this if the core developers in Node and TC39 do not begin to think about the need for a JS 2.0, fixing fundamental design issues and bugs-turned-features.

swah(537) 5 days ago [-]

I usually code a backend in Go or Python, but recently I've been thinking I should just abandon my 'prejudices' (JS sucks, Node sucks, Node is single-threaded, there are no SQL libraries, etc.) and move to JS on the backend as well, since that's what I already use on the frontend, be it web or React Native.

(I actually find JS very expressive and oriented towards a style of 'table-oriented programming' which is useful for software that changes a lot like UI, IMO).

d33(3483) 5 days ago [-]

Is the debugging experience finally any more bearable?

Lazare(3036) 5 days ago [-]

The debugging experience is very good, actually. Certainly comparable to other languages.

jbrooksuk(3429) 5 days ago [-]

Xdebug remote debugging is great, especially when paired with Laravel Valet (which supports more than just Laravel apps).

cutler(4127) 4 days ago [-]

I just took a random sample from Spatie's PHP libraries - https://github.com/spatie/crawler/blob/master/src/Crawler.ph.... Most of the code consists of:

    public function getSomething(int $someParameter): SomeType
    {
        return $this->someParameter;
    }

If this, along with the fanfold block comments you see in modern PHP, is considered 'clean code', I'll take procedural PHP4 any day of the week. Why PHP5 had to be reborn as the scripting version of Java I'll never understand.

kbenson(3329) 4 days ago [-]

It's been close to a decade since I did anything in PHP (PHP5), but since PHP5 had public (and private, and protected) accessors back then, and didn't have any real typing of parameters if I remember correctly, I assume this is a design pattern to enforce typing on the object's properties. Also, a good number of them have some other guard that throws an exception for specific problems. I think it's a valid design choice: since some object variables may need special handling through accessor methods, you create them specifically for each item that needs public access, and while you're at it throw in a few sanity checks. That provides a consistent interface, where accessors are both named and behave in the same manner (i.e. you don't mix method calls and direct assignment).
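The pattern being described might look like this. A sketch only: the class and method names are invented, loosely modeled on the linked Crawler code, and it shows a typed accessor pair doubling as a guard (before PHP 7.4 added typed properties, setters were where the type check and any extra validation lived):

```php
<?php
// Hypothetical class illustrating typed accessors with a guard clause.
class CrawlProfile
{
    /** @var int */
    private $maximumDepth = 0;

    public function setMaximumDepth(int $depth): self
    {
        // Extra sanity check the type system alone can't express:
        if ($depth < 0) {
            throw new InvalidArgumentException('Depth must be >= 0');
        }
        $this->maximumDepth = $depth;

        return $this;
    }

    public function getMaximumDepth(): int
    {
        return $this->maximumDepth;
    }
}
```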

wyqydsyq(10000) 4 days ago [-]

I respect how far PHP has come as a language and runtime in recent years, but I still can't understand why anyone would elect to use it for greenfield projects.

The developer experience might be better than it's ever been, but even as pointed out in this article, it still has all the old issues: inconsistent core API, vague global context, and way too much implicit magic and guesswork. Not to mention it still relies on running a separate, third-party HTTP server, in contrast to Node.js, Java, Python, etc., where the HTTP server itself is a native construct of the language/runtime.

I can't see a single use case where there isn't a more appropriate alternative to PHP. The only feasible reason I can see for someone to use PHP for a greenfield project in this day and age is that they simply don't know any better.

'I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.'

jtreminio(3995) 4 days ago [-]

> it still has all the old issues like inconsistent core API

Seriously, people keep bringing this up. Why? Are you a machine? Did you memorize the API of every single language you write in? I type `strst` and my IDE autohints `strstr()` and the argument order.

> vague global context

What's vague about it? What does 'global context' even mean in this sentence? Do you mean superglobals? Static properties? Can you provide some detail?

> Not to mention it still relies on running a separate, third party HTTP server, in contrast to Node.js, Java, Python

You _can_ run your website using only the built-in webserver for those languages, but are you really going to do that? Or are you going to put Nginx or Apache in front to do what they're made to do: be webservers? Hell, there's Unicorn, Passenger, Puma, all aimed at doing exactly this.

And surprise, surprise, PHP also has a built-in webserver you _can_ use, but shouldn't.

There's even ReactPHP, Swoole, Amp, all with webserver capabilities.

Sounds more like a person who knows PHP in passing, used it lightly, or used it long ago.

chairleader(4123) 5 days ago [-]

I'm working in a vanilla PHP codebase right now, and I see all sides: the fractal design fails and the surprising improvements to the language. I've just read up on the architecture of Laravel, and given the public opinion of it, it must be a very nice framework.

What strikes me most about PHP is its fundamental request/response execution model. Your execution context begins when a request is received and ends when you send the last byte or terminate the request. There are no startup healthchecks, no cache warming, nor any other bootstrapping of your service, unless you jump through convoluted hoops on your own. Your service is either accepting requests or it isn't. You either lazy-load your data into APCu or you don't. I've leveraged AWS healthchecks to achieve these in the past, but that path is not very maintainable.

Inevitably, in the course of maintaining a service, I find use cases for a phase of execution that should occur before the server goes live, or for shared static memory that should be available at all times, but (unless I'm missing something) those are things PHP doesn't do.

This is the reason that I find PHP to be a bizarro language - for its fundamental design assumption!

IloveHN84(10000) 5 days ago [-]

Actually you can compile PHP code and optimize it.

DCoder(10000) 5 days ago [-]

This execution model is just good old CGI scripts [0].

[0]: https://en.wikipedia.org/wiki/Common_Gateway_Interface

johnday(10000) 5 days ago [-]

Thanks for sharing. The thing that I am most concerned about with PHP is not, strictly speaking, the language itself. The syntax is slightly odd but otherwise fine. And the semantics are much the same as any similar language.

The real problem I find with PHP is that the designers seem (from an outside perspective) to take a similar approach to language backward compatibility as, for example, C/C++: there is some rejection of the idea of getting rid of the old, bad stuff. Plenty of new, cool features is all well and good, but there are still holes in the floor that new learners will fall through.

[To clarify, I understand the case made by the C/C++ committees on supporting old code. But nobody's programming pacemakers in PHP, one would hope.]

gog(10000) 5 days ago [-]

The amount of legacy applications being run on the internet is huge. Breaking stuff would put a great financial burden on the companies running those.

It is a sensible approach IMHO.

What kind of bad stuff are you talking about? If you mean function names and parameter ordering not being consistent, that is not an issue for developers using the language daily; most of them use an IDE, so it doesn't matter as much as people who don't use the language say.

Lazare(3036) 5 days ago [-]

I think that is largely changing. Quite a lot of breaking changes to clean up ancient cruft are slated for 8.0. For example, PHP has infamously had broken ternaries (they're left-associative, which is never what you want, is not how any other language works, and is a rich source of bugs). Nested ternaries are being deprecated and will be removed completely in 8.0; in a future version, the correct (right-associative) behaviour can be added.

Obviously there's a long road to go, but being willing to change how an operator works on that level demonstrates a willingness to break compatibility when necessary.
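The associativity gotcha is easy to demonstrate. A hypothetical $score/$grade example; the parenthesized line shows how pre-8.0 PHP actually grouped the unparenthesized form:

```php
<?php
// What you'd write in most languages (deprecated in 7.4, a fatal error in 8.0):
//   $grade = $score > 90 ? 'A' : $score > 80 ? 'B' : 'C';
// Pre-8.0 PHP grouped that left-associatively, i.e. as:
$score = 95;
$grade = ($score > 90 ? 'A' : $score > 80) ? 'B' : 'C';
// A score of 95 therefore yields 'B', not the 'A' every other language gives.
```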

return1(4131) 5 days ago [-]

Yes, that is the reason I chose to write stuff in it (also because I knew it relatively well). PHP has not been fancy for 15 years now, which is a good thing if you have more interesting things to do than refactoring your old websites because some college kids wanted to 'move fast and break things that work'. If PHP decides to go with major breaking changes, I will probably quit it.

jbrooksuk(3429) 5 days ago [-]

I heavily agree with this. The standard library, whilst great, has so many inconsistencies which I wish they'd fix and standardise.

Arbalest(10000) 5 days ago [-]

One of the main takeaways I got from the original Fractal of Bad Design post was the myriad inconsistencies in both the representation and execution of code. That was barely touched upon; this post mostly focused on how PHP has added features and ecosystem. But then, so has JavaScript.

Kiro(3702) 5 days ago [-]

And JavaScript is the best language in the world (with TypeScript).

simion314(4125) 5 days ago [-]

Unfortunately this is true, and it can't be changed: the function names and parameters are sometimes inconsistent, like strpos and str_replace. I would have to use the docs all the time if my IDE didn't show me the param names.

But once you get productive with PHP, there is no feature I find myself missing from other similar languages.

hadriendavid(10000) 5 days ago [-]

Do you still need a dollar sign for variables? :troll:

megous(4089) 5 days ago [-]

You pay for every useless variable. It keeps the variable count low.

simias(4066) 5 days ago [-]

As far as I'm concerned the bridges have burned a long time ago and I definitely can't see myself giving PHP a second chance. That being said, if I'm wrong and PHP has really managed to become a decent language (or at least something that's not completely insane) its defenders should focus on actually showing what modern PHP code looks like.

Is performance that big of a deal for most people? In a world where Ruby on Rails exists, I find that hard to believe. Servers are cheaper and vastly more powerful now than in PHP's infancy; I'm sure that for the vast majority of use cases PHP's performance (whatever it is) is good enough.

The infamous 'Fractal of Bad Design' wasn't about that. It was about the ridiculously inconsistent and error-prone API, the insane defaults, the counter-intuitive behaviour of '==', the headless-chicken development roadmap where maintainers would add features because they were popular in other programming languages, without trying to figure out whether they had a place in PHP...

Surely a lot of this has turned into technical debt? Even assuming that 'modern' PHP managed to come up with better ways to deal with all of this, I assume these obsolete functions and operators still linger for backward compatibility? If so, how do you avoid them?

Again, I personally don't really care, but if you want to win new converts who haven't been as scarred by PHP as I have been I think that's where you should focus your efforts. Maybe somebody should write a point-by-point rebuttal to the 'fractal of bad design' article?
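For anyone who never hit the '==' behaviour referenced above, a minimal sketch. These particular coercions are stable across PHP 7 and 8; 0 == 'a', which was also famously true, was changed in 8.0 and is therefore left out:

```php
<?php
// Loose comparison coerces operand types before comparing:
$zeroStringIsFalse = ('0' == false);     // true: '0' is falsy
$numericStrings    = ('1e3' == '1000');  // true: both sides parsed as numbers
$strictCompare     = ('1e3' === '1000'); // false: strict checks type and value
```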

jbrooksuk(3429) 5 days ago [-]

> [...] its defenders should focus on actually showing what modern PHP code looks like.

Laravel does this very well, not just by being a great framework but also in its ecosystem (Envoyer, Forge, Spark, Nova, Horizon, Socialite) and its documentation (https://laravel.com/docs/master).

swalsh(2015) 5 days ago [-]

I was pretty anti-PHP before I worked for a multi-billion-dollar company that was built on top of it. It's not a beautiful language, but it offers a lot of side benefits in terms of dev-ops, tooling, and frameworks. As a company, the language was not something that held us back. The company moved really fast, and PHP was a big part of why. In my career over the past 14 years I've done web work using Ruby, Python, C#, JavaScript, and Java, but that ugly PHP site was the one that impressed me the most. People moved fast in it, our tooling never held us back, everything just worked. It was the most utilitarian environment I've worked in. The first time I touched PHP, it was a spaghetti disaster... but today, next to Python, it's my go-to language. It gets the job done faster, and its maintainability is just fine. It really doesn't matter if things sometimes look odd.

dangerface(10000) 5 days ago [-]

Your comment reads as 'I like Ruby, therefore PHP is dumb'.

> Is performance that much of a big deal for most people?

Yes, obviously. Performance is important no matter the language.

> the counter-intuitive behaviour of '=='

You should learn the language's type system instead of assuming it works how you think. Again, this is true of every language.

> I assume that these obsolete functions and operators still linger for backward compatibility? If so how do you avoid them?

The built-in linter gives you a warning that they're obsolete.

The inconsistent naming and arguments, dumb defaults, and thousands of functions in a global namespace are the real problem, and there is no solution to them. Oh, and calling functions is unacceptably slow. Those they can and need to fix; the rest of this is just 'I like my language better'.

pdimitar(10000) 5 days ago [-]

Agreed, articles like these do absolutely nothing to address PHP's insane warts, some of which you mentioned.

That's not helpful. Laravel can objectively be the best web app framework in the world and I still won't touch it, because of PHP.

Predictability, minimum WTFs per minute, consistency, sane defaults -- these win over short-term convenience, every time.

return1(4131) 5 days ago [-]

> Is performance that much of a big deal for most people?

For most people outside VC-funded startups, yes it is. It is also an environmental concern; imagine if 80% of the web were running on Ruby.

brendt_gd(3239) 5 days ago [-]

> its defenders should focus on actually showing what modern PHP code looks like

This is one of my personal side projects, written in PHP and Laravel: https://github.com/brendt/aggregate.stitcher.io

Here's a list of all OSS package we maintain at work: https://github.com/spatie

chriswarbo(4069) 5 days ago [-]

> Even assuming that 'modern' PHP managed to come up with better ways to deal with all of this, I assume that these obsolete functions and operators still linger for backward compatibility? If so how do you avoid them?

I think the 'path of least resistance' is important: developers are time-constrained, understanding-constrained, lazy (if they're virtuous), etc. There's a big incentive to do whatever is easiest/quickest.

When I last used PHP, about 5 years ago, there were OOP APIs cropping up to replace many of the standard global functions; namespaces had just been introduced; closures had become useful; frameworks like Symfony (and Drupal 8) were becoming established, rather than the old 'plugin' approach of throwing around arbitrary code and hoping for the best; dependencies were being managed by composer; files could be autoloaded from sensible locations; testing frameworks like PHPUnit and PHPSpec had become best practice; etc.

Yet all of those things were opt-in and verbose. The path of least resistance was still:

        Hello <?php echo $_GET['name']; ?>
(For non-PHP programmers, this is appending a GET parameter straight into the page, which is an XSS vulnerability). Doing things 'properly' took a great deal of effort and discipline.
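The escaped version, for contrast (a sketch; `??` needs PHP 7+, and the explicit escaping call is exactly the step the path of least resistance skips):

```php
<?php
// Escape user input before echoing it into HTML:
$name = $_GET['name'] ?? '';
echo 'Hello ' . htmlspecialchars($name, ENT_QUOTES, 'UTF-8');
```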

Compare this to something like Java: it favours class-based OOP so much that even 'hello world' needs a class. The path of least resistance is to do things 'right' (from Java's perspective). Haskell's path of least resistance is simultaneously easier ('hello world' is just `main = putStrLn 'hello world'`) and harder (`main` uses the `IO` type, whose API enforces certain conventions).

Deprecation warnings, linters, etc. can help with this; but PHP's only real strength is its installed base of code and developers; changing the language too much would throw away this advantage (akin to being a new language, see Python 3 and Perl 6); not changing it enough prevents the more serious and/or systemic issues from being dealt with.

I wish the language designers and users luck, but I'm really hoping to never use it again ;)

zaphar(4034) 5 days ago [-]

Yeah, I've been burned too badly by PHP. There are languages I haven't done in years that I would go back to if there was a compelling need to.

PHP is not that language. Far too many sleepless late nights trying to clean up some security hole. PHP is like that abusive ex that everyone says has changed. It may be true, in which case, good for PHP.

But I won't be putting myself in that position again.

baybal2(2668) 5 days ago [-]

What ended PHP was not the worst parts of its design, but upstream's unwillingness to simply change the language for the better.

See, as a commercial enterprise (begware as a business), all the 'foundations' and sponsorship pools surrounding the language had already accumulated a big enough pool of clients who were happy with PHP as it was in the 4.0 era. They had no incentive to progress, especially if their business depended on 'fixing brokenness'.

Open source and sponsorship do not always mix well. Just as with front-end frameworks/libraries that live off sponsorships, eventually it leads to people prioritizing pleasing sponsors, and pushing their software, over improving the software itself.

The current allergy in the JS world to genuinely required breaking changes is all about that as well. Any time talk of a genuine 'JS 2.0' starts to entertain the minds of powerful players in the JS world, tons of people with commercial interests come and extinguish the conversation with 'no, we absolutely cannot ever break anything, ever, even if it is already broken'.

Breaking changes in the JS world do occur, but most of them are near-accidental, security-related, or done as part of actual sabotage, like the intentional breaking of synchronous AJAX requests after they had shipped.

My logic is: if breaking changes are still unavoidable in JS, why not do them in a controlled manner, rather than through sneaky sabotage ops like the one above?

k_bx(4116) 5 days ago [-]

> Server are cheaper and vastly more powerful now than in PHP's infancy

Slightly off-topic, but I think that nowadays it's actually less of an argument than when PHP started becoming popular. Back in the day, statically typed languages were just bad and unproductive. Today they are way better, and the fact that I can run my service on a $3/month server with 900MB of RAM and not think about the price tag at all is actually quite a decent argument to stay away from dynamic languages (though not the main one, tbh).

jaabe(10000) 5 days ago [-]

One of my old friends has been working with PHP since forever; we've teased him about it over the years.

Since then we've moved to a world where every program is web-based. I mean, even huge enterprise systems in healthcare run on some JavaScript MVVM framework and a web-backend in the cloud.

The truth is that PHP is more adept at handling this than a lot of the stacks you see in enterprise. It's really kind of silly, but I don't think we'll ever adopt PHP either, exactly because of its bad rep. But sometimes I wonder if we shouldn't.

onlyrealcuzzo(4039) 5 days ago [-]

It's gotta be the easiest stateless system to get started with, right? It's decidedly not a BAD language anymore. It probably has some of the best tooling for such a system as well. And it has a GIGANTIC community.

That's worth a lot to most people, I think?

brlewis(1447) 5 days ago [-]

>I assume that these obsolete functions and operators still linger for backward compatibility? If so how do you avoid them?

This. If there isn't a linter that bombs out on those things that used to be standard PHP, modern PHP is a non-starter. There are too many bad examples out there that will make their way into modern code if you let them.

aacanakin(3800) 5 days ago [-]

'While async and await are not available yet, lots of improvements to the language itself have been made over the past years'

It's 2019 guys. Please.

zanny(10000) 5 days ago [-]

If you are looking for speed, tacking an event loop onto an interpreted scripting language is trying to dig a hole with a rake.

It's why the demand for such functionality was almost wholly absent from Python, PHP, etc. for so long. Asyncio is a throughput boon even in a single-core environment, which dates its practicality back to the 90s - and it was found there, in pretty much every graphics stack and COM. C APIs have used what is fundamentally a promise since the 80s.

But if you are in a situation where you want those kinds of performance characteristics where you have work to do during blocking operations you probably don't want to be writing that code in an interpreted language to begin with.

Last week I was optimizing some performance-critical Python, but when considering my options I just ported the whole thing to Boost/C++ and got an ~80x speedup over the whole loop.

There's only a narrow range of problems actually best solved with interpreted asyncio, pretty much only in the space where introducing a build system and doing language binding isn't worth it but the gains from being non-blocking are. They exist, and it's definitely not a bad thing to have async support in your interpreted language, but it definitely isn't mission critical in the slightest.
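To make the single-core throughput point concrete, here is a minimal Python asyncio sketch (the coroutine name `fake_io` is invented for the example): two simulated 0.1-second I/O waits overlap on one thread, so the total wall time is roughly one delay, not the sum.

```python
import asyncio
import time

async def fake_io(delay: float) -> float:
    # Stand-in for a blocking operation (DB call, HTTP request, file read).
    await asyncio.sleep(delay)
    return delay

async def main() -> float:
    start = time.monotonic()
    # Both 0.1s "I/O waits" overlap on a single thread and a single core.
    await asyncio.gather(fake_io(0.1), fake_io(0.1))
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f"two overlapped 0.1s waits took {elapsed:.2f}s")
```

Swap `asyncio.sleep` for a real network call and the same overlap applies; CPU-bound work, as the parent comment notes, gains nothing from this.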

yzssi(10000) 5 days ago [-]

It's good if they take some time to think about the implementation instead of doing a crappy job like Python did.

richardwhiuk(10000) 5 days ago [-]

And yet ~everything in https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/ is still true.

All that's happened is that the language is even more complex and has even more baggage.

alexandernst(4075) 5 days ago [-]

I'm not really sure why this comment is getting downvoted, but literally every single thing in that post still holds true.

jonwinstanley(10000) 5 days ago [-]

It sounds a bit like you are just trolling this thread?

The PHP community has made a ton of improvements over the last few years, surely you can't criticise a project for improving?

wvenable(4063) 5 days ago [-]

Everything in fractal of bad design wasn't even true when it was written! 1/3 of that article was factually incorrect when written and still is. 1/3 is just opinion.

My rebuttal from 2012: https://news.ycombinator.com/item?id=3821029

benatkin(3116) 5 days ago [-]

You could make a similar article about Ruby, Python, or Node.js. None of these are bad languages, but you can't write a perfect language, because languages are designed to be used day in and day out by people. I like Python and I've found that some of the things I really like about Python are the same things other people dislike about it.

PHP has a lot to like about it - otherwise it wouldn't be as popular as it is.

golfer(2817) 5 days ago [-]

So much of the web is powered by PHP. Wikipedia (MediaWiki software) and Facebook are two of the largest footprints of usage. And of course, WordPress. PHP has a lot of firepower behind it.

Shish2k(3822) 5 days ago [-]

The Facebook www code is written in Hack, which started off as a PHP-compatible language / stdlib, and then dropped PHP compatibility when it was holding them back.

Having worked with both, any one of the major additions (XHP, good type annotations, sane collection types, etc) makes my development experience 10x better; going back to vanilla PHP now just makes me sad...

paulcarroty(3808) 5 days ago [-]

Thanks, modern JavaScript is much better & more profitable than not_a_crappy_language_anymore today.

If you're 60 you can still use PHP for legacy projects, but there's no sense learning it now and killing your career. Market matters.

megous(4089) 5 days ago [-]

As a seasoned PHP dev, I can actually see where nodejs sucks. Things that are native functions or part of the default extensions in PHP, I have to hunt for on npm or be responsible for re-implementing from scratch. And PHP has a lot integrated.

It sucks if you know you can whip out a single call in PHP, but have to search for an hour in npm and not really find anything satisfying - all the while knowing that in PHP you'd already have been doing something more useful.

scandinavian(10000) 5 days ago [-]

    php > echo count(get_defined_functions(TRUE)['internal']);
They should clean up the global name space, but that will never happen, so I'll continue not using PHP.

jbrooksuk(3429) 5 days ago [-]

Why does the number of methods in the global namespace bother you?

Genuinely interested in the rationale behind this. Sure, it's a lot, but why does that matter?

imtringued(10000) 5 days ago [-]

Yeah, and they should add basic features like lambdas, rewrite the entire standard library to support them while they're at it, and when they're done with all of that, could they potentially provide a separate syntax similar to Facebook's Reason? It's no longer the same language if you change everything, and therefore there is no reason to waste time improving PHP.

noname120(10000) 5 days ago [-]

While parent's comment seems to attract some downvotes, there is a good point hidden beneath: PHP (mostly) keeps backward compatibility. This means that old and unsafe functions that should not be used anymore are not removed from the language and are still available to use.

Coupled with a lot of outdated documentation and tutorials that advocate using these functions, it becomes a big issue. For example, many tutorials use '==' when '===' should be the default. Deprecated functions are used in many tutorials as well.
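To make the '==' vs '===' point concrete without a PHP runtime, here is a rough Python sketch of the coercion that makes PHP 7's loose comparison surprising; the helper `php7_loose_eq` is invented for illustration and covers only the string-vs-number case.

```python
# php7_loose_eq is an invented helper that mimics one slice of PHP 7's
# loose `==`: when a string meets a number, the string is coerced to a
# number first, and non-numeric strings coerce to 0.
def php7_loose_eq(a, b):
    def to_number(s):
        try:
            return float(s)
        except ValueError:
            return 0.0  # e.g. 'abc' -> 0 under PHP 7 coercion
    if isinstance(a, str) and isinstance(b, (int, float)):
        return to_number(a) == float(b)
    if isinstance(b, str) and isinstance(a, (int, float)):
        return to_number(b) == float(a)
    return a == b

print(php7_loose_eq("abc", 0))    # True in PHP 7 -- the classic surprise
print(php7_loose_eq("1e2", 100))  # True: numeric-string coercion
```

This is why `===`, which compares type as well as value and never coerces, is the safer default; PHP 8 also tightened string-to-number comparison, so `0 == "abc"` is no longer true there.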

mattferderer(10000) 5 days ago [-]

I think the easiest thing to forget is that what made PHP popular was how easy it was to get started writing software that worked well enough. The barrier to entry was incredibly low. It caused a lot of bad code & gave PHP a bad reputation, but a lot of good also came from it.

PHP made a lot of careers for developers as they were able to fake it until they made it while learning how to code & providing significant value to businesses.

While PHP is nothing like its former self, most people who developed web apps in the first decade of the century will probably always remember PHP for what it was.

gwbas1c(3828) 5 days ago [-]

The thing that I like about PHP is that I can mix HTML, code, and database queries into a single file.

This is incredibly useful when learning basic web development, even though it doesn't scale for more complicated applications.

It's also incredibly useful when trying to bang out a simple experiment.

pavel_lishin(221) 5 days ago [-]

> PHP made a lot of careers for developers as they were able to fake it until they made it while learning how to code & providing significant value to businesses.

Unfortunately it also provided a lot of careers to developers who never got to the 'make it' stage.

StevePerkins(3835) 5 days ago [-]

On one hand, I do see that HN (and probably Reddit) live in a bubble. Where most of the darling languages and tools have very low use in the real-world. And where most of the languages and tooling that actually run the real-world go un-discussed, or simply dismissed, because they're boring and don't appeal to students and entry-level devs working on side projects. I realize that the perspective most rookies get from these forums is completely distorted.

On the other hand, I ALSO think that PHP is only still relevant because of WordPress. The business world runs on Java, and .NET to a lesser extent. I see job postings for Python all the time, as it and the JVM run the worlds of data science and big data. One can even still make a thriving living with Ruby. But I just don't see any recruiter activity around PHP at all. And whereas my Java shop will hire Python or Node junior devs, on the theory that they can learn, we would be more likely to skip over a PHP-based resume (if we ever actually saw any).

phlyingpenguin(10000) 5 days ago [-]

I was just talking about this with a student who got onto a PHP project with an older developer lead. We decided that PHP is a code smell for a poorly managed project. You know, the kind that were common in the early aughts and featured poor version control practices, no tests, no automation, live changes, live database schemas, etc. That's certainly not the way things have to be, but it's never surprising when it is. The root cause is the developer in charge and their reports, but PHP usage is an easy marker to check. We found most examples using the other languages you mention to be generally better managed in our area/sampling.

bartimus(4096) 5 days ago [-]

I think the bubble has to do with purists vs pragmatists.

I once joined a Java team as a front-end dev (I'm full-stack). They would do the back-end. They had done everything by the book perfectly. Following all the best practices. But the back-end wasn't doing what it needed to do by a long shot. They were stuck. Meaning I couldn't make progress. So I cooked up a simple (temporary) PHP back-end so I could easily build the needed queries. All nicely secured with LDAP. I was literally running circles around their solution. But it was blasphemy.

Eventually they ported my code into Java. Some portions literally one-to-one. Nobody will ever know the critical role PHP played in this Java success story.

ndarwincorn(10000) 5 days ago [-]

You make a sweeping statement about a bubble and then demonstrate the edge of yours:

> The business world runs on Java, and .NET to a lesser extent.

Flip those.

> I just don't see any recruiter activity around PHP at all.

This is probably true. My own anecdote: I'm at a primarily-PHP SaaS provider in a major tech hub, and for the couple hires we've made in the last year all of our applicants with extensive PHP experience said we were the only PHP shop they'd talked to, many of them over a year into their job search.

cbdumas(4036) 5 days ago [-]

Often when there is a post about PHP I read over these threads, sort of subtly trying to figure out how a language with such a large user base has remained basically invisible to me. And I don't think I live in the HN bubble; though I do use a lot of Python, in my consulting work I've also done plenty of work with .NET, the JVM to a lesser extent, and other 'boring' tech.

But PHP has literally never come up, even as a suggestion or as a tool that is a small part of the company stack or whatever. It feels like I'm looking into a strange parallel universe of software development reading through these threads. PHP devs, where are you?

foobarandgrill(10000) 5 days ago [-]

What's wrong with Laravel or Symfony?

saltminer(10000) 5 days ago [-]

> PHP is only still relevant because of WordPress

There's also Drupal, which is very common in higher ed.

wyqydsyq(10000) 4 days ago [-]

There's still lots of small-fish web development agencies who specifically hire PHP developers.

That's about it though. I think the web development industry is specifically tied to PHP because of its excessive dependence on WordPress, Drupal, etc. This also produces developers who are overly specialised in niches using these platforms, at the expense of general capability. Think back to the mid-2000s, when there was a big difference between a JavaScript developer and a jQuery developer - while they both wrote JS, the latter was generally incapable of using vanilla JS proficiently because their use of jQuery's abstractions as a crutch impaired their learning of the underlying fundamentals. A lot of 'WordPress developers' are so heavily specialised in using WordPress that they probably can't even remember how to write a vanilla PHP site.

nikdaheratik(10000) 4 days ago [-]

Well I've been gainfully employed for pretty much my entire career using PHP and I've never had a problem with recruiters. It really depends on the city, the companies in that city, and the overall market in your region.

Big business runs on Java/.NET, but small-midsized businesses run on a variety of different platforms partially based on the history of their IT department and their development needs.

The bottom line is that you can write a good greenfield web application for a midsized business faster, with less overhead, and with fewer headaches using PHP and a good framework than you can with Java or any of those other tools you mentioned. The tradeoff is that they don't integrate as easily with the huge backend software packages and may not scale as well as Java and .NET. Which is why huge businesses don't run with them as much.

IloveHN84(10000) 5 days ago [-]

Isn't Facebook running on HipHop PHP? What about Zalando, one of the biggest platforms out there? They started using Magento and tuning it

Historical Discussions: Can we all stop using Medium now? (May 16, 2019: 894 points)

(894) Can we all stop using Medium now?

894 points 3 days ago by squiggy22 in 1881st position

www.webdistortion.com | Estimated reading time – 3 minutes | comments | anchor

Medium is cancer. A trojan horse. It's Facebook. But for blogging. A walled garden behind which all your favourite content lives, and yet you are forced to login via their shitty UI, or worse still pay for access.

When did reading stuff on the web become pay to play?

I'll tell you when. When we all got greedy looking for the attention our blogs used to get until social media came along. Or when we got greedy looking for coinage off the onetruecontentsourceTM. If you enjoy reading on the web chances are you've been forced to login to Medium at some point in the last week.

Have any of you noticed the dark UI patterns? I bet you did. I'm guessing you noticed smart programmers / designers / internet celebs tweeting about how shit it is. Maybe someone you respect continually pointed out their accessibility failures. Maybe you nodded your head in agreement. Maybe you even retweeted it. Go you.

Just so we are clear. Medium takes your content, rolls it up into a pretty SEO friendly package for themselves and sells it. Oh, and turns us all into seals waiting for someone to throw us a fish in the process. If you are lucky, you might even get a cut. You know. Like the sort of cut artists get on Spotify. Profit share I think the cool kids call it.

So why is everyone still publishing on it? For what? More eyeballs? More attention? More reach?

Balls to that. If only one of you read this I'll be happy. At least I own my own platform and I'm not being controlled by some monolithic publishing giant that can do whatever they want and sell whatever advertising they please ON YOUR HARD WORK.

Please. It's 2019. Learn to market yourself and your content. Quit being lazy waiting for Medium to do it for you. OWN YOUR PLATFORM.

Oh, and don't get me started on the mini publications that we all jump for fucking joy over being published on. How's about no. How's about we start taking responsibility for the absolute decimation of the traditional web.

You know. That old internet that used to be open and free.

All Comments: [-] | anchor

silveroriole(10000) 3 days ago [-]

Seems like not many commenters are giving a perspective on why people might like Medium more than going to a personal blog. Here's mine.

As a reader, I usually don't care who wrote the post I'm reading. I also don't care about seeing more content from the same person, which is all I would get on a personal blog; in fact, I would prefer to see related content but different viewpoints from other people, which is what Medium shows me. I want to read them all in the same format, not hop between different blog layouts. I want them all to have consistent like/comment mechanics. As a reader, sorry, I just don't care about your personal brand or platform. I want to easily read a lot of stuff by different people on a topic.

Never used it as a writer, but why should individuals have to set up their own SEO, comment system, blog system, etc just to post an article? Because they think that they're so important or controversial that they need a personal space/brand? For most writers I think you get a lot for free with Medium.

jesseschalken(10000) 3 days ago [-]

> I want them all to have consistent like/comment mechanics. As a reader, sorry, I just don't care about your personal brand or platform.

This is an important point. There is a small mental burden users face every time they see a new website and have to understand how it has been laid out and where things are and what they mean. Medium gives users 'just the content' in the same familiar layout to streamline the process.

Not to mention the sites that have silly disorienting scrolling effects, with things moving around while they slowly load, or are barely even functional on mobile. Although these tend not to be problems for the average blog.

kgraves(3682) 3 days ago [-]

Agreed. Given the choice between Medium and a custom blog as a publisher, Medium is much faster to get up and running and has a larger reach than anything I'd set up myself.

Many people who aren't technical don't want to have to manage rolling their own comments system, custom SEO, analytics and all of that stuff.

fbn79(4123) 3 days ago [-]

To draw a parallel with traditional print, it's like saying that everyone who wants to publish an article must become an editor and found a newspaper. That's not realistic. Medium and other platforms put a lot of work into promotion, positioning, and creating a place of engagement for writers and readers. I don't think every writer must become an editor in the digital era. It's like saying that if you are a musician you have to host your own clone of Spotify and bypass all distribution platforms.

omarchowdhury(2428) 3 days ago [-]

Medium doesn't provide editing services, nor do they promote your content. Your content promotes itself by merit of engagement with readers.

Grustaf(4131) 3 days ago [-]

Seems like you can have the cake and eat it too by posting to Medium AND your own domain, or wherever you want.

slig(1143) 3 days ago [-]

Do you mean using your own domain on a Medium blog? If so, that was disabled a while ago. Or do you mean cross-posting the same content on Medium and your blog? That's tricky: how do you tell Google that it should consider your blog the owner of the content? Otherwise you can get punished for duplicate content.

dmje(10000) 3 days ago [-]

There's quite a lot of mileage in what this guy says, and the 'content behind walled garden' thing pisses us all off (I'm a non-Facebooker for All The Usual Reasons and have the same response whenever I see stuff 'published' there).

But: no one seems to have mentioned the fact that you can publish on WordPress and use the Medium plugin to co-publish to Medium. It deals with the whole canonical content / SEO thing, you get to keep your original content, and you get the potential benefits of a Medium audience / stats / etc.

Best of all worlds.
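The mechanism behind that canonical-content handling is the `rel="canonical"` link element: the syndicated copy points back at the original, so search engines credit the original instead of penalizing either page for duplicate content. A minimal sketch of how a cross-posting tool might emit it (the helper name `canonical_link` is invented for illustration):

```python
from html import escape

def canonical_link(original_url: str) -> str:
    # The syndicated copy should carry a canonical link pointing back at
    # the original post, so search engines treat the original as
    # authoritative rather than flagging duplicate content.
    return f'<link rel="canonical" href="{escape(original_url, quote=True)}">'

print(canonical_link("https://example.com/blog/my-post"))
```

If you cross-post by hand rather than through an import tool, this is the tag to check for in the syndicated copy's `<head>`.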

aaron_m04(10000) 2 days ago [-]

At any time, Medium can decide to require login to view articles, if they think it is in their best interests.

Anything you post on Medium is subject to their whims.

vermilingua(3207) 3 days ago [-]

If someone can find your work on Medium, why would they bother to find your site and read it there instead?

ductionist(1758) 3 days ago [-]

That's interesting - does the canonical tag identify the Medium article or the WordPress article as canonical?

bwasti(3675) 3 days ago [-]

I wrote one article on Medium. I ended up writing my own little version using open source markdown rendering to see how hard it would be to roll my own, https://jott.live

From what I found, there is a lot of stuff Medium does well that is hard to recreate by yourself. Super simple features, like

- rich text editing that isn't ugly

- caching what you're currently working on

- tracking views and who has viewed your article

aren't easy to build and generally aren't worth it unless you write many many articles.

Although I don't like the aggressive account on-boarding and payment models for reading articles, Medium certainly makes life easy for writing articles.

luckylion(10000) 3 days ago [-]

You don't necessarily have to build it yourself to have control over it though. I mean, WP isn't the dream you've always looked for, but it (+ a plugin or five) can do that and is pretty much set up in an hour. With wordpress.com, you'll get that without running it yourself.

chrshawkes(4108) 3 days ago [-]

They use Quill.js for rich editing (you can use it in your project easily).

Caching/saving work is just simple AJAX calls to a local database or filesystem, which can store the JSON data that Quill.js creates.

Google Analytics is free, offers more customer insight and analysis opportunities, and is as simple as including a script tag in the HTML.

I plan to show how to replicate this website in a matter of minutes on my YouTube channel.
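Sketching the server side of that AJAX-save idea in Python (all names here - `save_draft`, `load_draft`, the on-disk layout - are invented for illustration, not Medium's or Quill's API): persist the editor's Delta JSON keyed by draft id, and hand it back when the page reloads.

```python
import json
import tempfile
from pathlib import Path
from typing import Optional

DRAFT_DIR = Path(tempfile.mkdtemp())  # stand-in for a real data directory

def save_draft(draft_id: str, delta: dict) -> None:
    # An AJAX endpoint would call this with the editor's Delta JSON.
    (DRAFT_DIR / f"{draft_id}.json").write_text(json.dumps(delta))

def load_draft(draft_id: str) -> Optional[dict]:
    # Restore whatever the author was last working on, if anything.
    path = DRAFT_DIR / f"{draft_id}.json"
    return json.loads(path.read_text()) if path.exists() else None

save_draft("post-1", {"ops": [{"insert": "Hello, draft\n"}]})
print(load_draft("post-1"))
```

A real endpoint would also sanitize `draft_id` and authenticate the caller before touching the filesystem.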

bubblewrap(10000) 3 days ago [-]

Wordpress has an app store for plugins. I installed a popular statistics plugin and it seems to be doing well. I think it is pretty amazing, actually (the app store). https://wordpress.org/plugins/

Hosting my blog costs me 5€/month, though. Not sure if I have the cheapest option. On the upside, it also supports multiple email accounts, so I am in the process of moving all emails (also family) over there, saving on extra costs for email.

I think attracting viewers is the only real issue. That's why social media, Tumblr and Medium took over from blogs. Because they have the recommendation and sharing features that bring viewers to people's postings.

skilled(1257) 3 days ago [-]

I have yet to figure out the 'persona' for Medium users.

From time to time you can come across some decent content, but the hoops you have to jump through to read it are simply not worth my time.

StevePerkins(3835) 3 days ago [-]

It seems to mostly revolve around resume-enhancement spam. The sort of empty filler that people write, just to appear like a 'thought leader' when potential employers or clients google them.

There are two categories of resume-spam:

1. 'Sunny' filler that doesn't really say anything new (e.g. 'The Future Is Serverless'). The preferred platform for this seems to be LinkedIn.

2. 'Edgy' filler that doesn't really say anything new (e.g. 'Serverless Considered Harmful', or 'Why I Don't Use Serverless'). The preferred platform for this seems to be Medium.

dickeytk(3231) 3 days ago [-]

The main argument in this article doesn't apply unless the Medium writer enabled the partner program on the post. That's what enables the paywall.

troydavis(424) 3 days ago [-]

A huge percentage of the authors behind the paywall don't realize that they are. It seems like Medium might have made this option less clear than it needs to be (like by describing the benefits but not the drawbacks) or made it on by default.

pwncake(10000) 3 days ago [-]

Is there a suggested alternative people should be turning to as opposed to building their own, especially if their expertise is in writing or some other field than computer science or web design?

CM30(3210) 3 days ago [-]

WordPress/Ghost on their hosting? Or for the completely non technical, the cloud hosted versions of said scripts on WordPress.com and Ghost.org?

hombre_fatal(10000) 3 days ago [-]

Though I also kinda resent the idea that, just because I have the skills to build my own blog from scratch, I'd want to waste even one second building it, setting it up, configuring it, maintaining it, fixing it, implementing basic features that other platforms already have, etc.

recroad(4124) 3 days ago [-]


Hackbraten(10000) 3 days ago [-]

Please re-enable the `-webkit-overflow-scrolling: touch;` CSS style for your content. The article is a pain to read on Mobile Safari because bouncy/rubber-band scrolling is disabled.

pokstad(4124) 3 days ago [-]

Goes without saying, the consistent Medium experience is a good reason why we keep using it. Self-made blogs like this one that do annoying things that take away from the content are footguns.

skrowl(3974) 3 days ago [-]

Mobile Safari is the new IE6.

Hopefully that class-action against Apple for the app store monopoly bears fruit (pun fully intended) so some alternative browsers can be installed on iOS (via Amazon app store or something similar to f-droid).

Note for those new to the party: 'Chrome' and 'Firefox' on iOS are just skins on top of Safari; iOS rules require you to use Safari if your app browses the web. See 2.5.6 here if you don't believe me - https://developer.apple.com/app-store/review/guidelines/#sof...

NightlyDev(10000) 3 days ago [-]

That feature has serious bugs on iOS, and that's probably why it's not the default as Apple can't get it to be stable.

Don't ask him to add it, ask Apple, it's their fault. It's not a production feature.

And since it's Apple, you can't just use another browser as it's all the same crap on iOS.

You're stuck with something worse than the 2019 equivalent of IE6. iOS is a joke when it comes to web browsing.

deathanatos(10000) 3 days ago [-]

Did the author change the site? You're not the only person in the comments complaining, so I presume y'all aren't making this up, but the website doesn't appear to mess with that style. (Chrome doesn't report it on either <html> or <body>.) (and it seems normal to me in iOS, but I'm normally an Android user.)

musicale(10000) 3 days ago [-]

This should be a Medium link.

Lowkeyloki(4121) 3 days ago [-]


I dare the author to publish it on medium.

Triple. Dog. Dare.

melling(1606) 3 days ago [-]

It's halfway through the month and I'm blocked by Medium. It is a bit frustrating.

I'm not sure about this comment though:

"Learn to market yourself and your content. Quit being lazy waiting for Medium to do it for you."

Maybe I am lazy but there really isn't an easy and inexpensive way to market your blogs or apps.

Deimorz(3637) 3 days ago [-]

Just find a link to any article you want to read by searching Twitter (or even tweet it yourself). t.co links always bypass the paywall.

bscphil(3900) 3 days ago [-]

You could buy advertisements... what you're looking for is basically a way to get ads for your product in front of eyeballs, even if you don't call them 'ads'. I don't see any reason why that ought to be cheap.

Maybe I'm misunderstanding what the goal of 'marketing' is in this context, however.

jolmg(10000) 3 days ago [-]

> Maybe I am lazy but there really isn't an easy and inexpensive way to market your blogs or apps.

HN, Reddit, and other social networks?

phantom_oracle(1450) 3 days ago [-]

The author's complaints are valid, but he fails to acknowledge that besides Medium, so many great (and not so great) conversations happen on platforms that are now locked behind login gardens. It was fine back in the day to just create a throwaway FaceTwit account, but now they demand mobile numbers too.

Pre social media, how did people discover great content? You can go as far back as the 90s or the very early 00s as a time reference.

azimuth11(10000) 3 days ago [-]

That's a good point. It was much harder to find interesting stuff to read back in the day. I had my Google Reader with a bunch of blogs, but that ended up being pretty noisy and they took it away. You had to do the searching yourself and be your own data curator, or have a network of friends pitching in content to check out (e.g. forums, maybe Digg?).

It's locked in a somewhat growing, walled garden, yes, but Medium is now basically the Google Reader for a lot of people. Those people don't care about logins if they can get a good read every now and then. And, when Medium goes to crap, some other platform will pop up with no ads (for a while :) ) and lure us all over there.

sgarman(4061) 3 days ago [-]

I know this doesn't justify all the issues, but I'm at least a little excited that someone is trying to create a media/news service that isn't ad-based. Seems like we have a way to go, but at least this is a start.

I'm a little surprised that there are so many complaints here - I bet a lot of the same people complaining also complain about ad-based services and how they produce news / journalism / articles designed for clicks and not quality content.

mojuba(3443) 3 days ago [-]

Ever since we switched our app from ads to an ad-free soft paywall with limited daily content, we've gotten a lot of hate and negative reviews on the App Store from our users. Some even said quite directly: 'bring back the ads'.

This is so strange. What these people don't understand is that we all buy products whose prices include advertising costs. Eventually we pay for everything anyway; there is no charity (unless something is explicitly charity). Let alone the cases where ads make us buy things we don't need. And that's not to mention the rabbit hole behind seemingly free services like Google Search/Maps.

I don't get all the hate here. Paywalls are at least honest, clear, and upfront about how the business makes money. We should support this type of business. It's usually the switch that triggers a lot of hate; businesses should probably be cautious about it.

inapis(4043) 3 days ago [-]

The internet suffers from massive cognitive dissonance. Forums like HN/Reddit just amplify it. People complain about ads; people complain about subs. Fortunately, people who want to pay for content will pay for it as long as you express the value clearly. And people who complain about both are probably a minority.

bitL(10000) 3 days ago [-]

We need a Zeitgeist service that returns the Overton window for each company and automates hiding articles/content that no longer fit inside it.

markn951(10000) 3 days ago [-]


55555(4082) 3 days ago [-]

I was writing on Medium because it ranks well in Google. I wrote nearly a dozen really great articles on primarily health and dietary supplement topics. After 2 months they started to rank well and were getting daily readers, as the content was really great. Then I wrote an article on a 'research chemical' and they banned my account overnight. I lost the final edited versions of all content. They did not send me a zip with the content. They could have simply deleted the offending article(s) but instead they deleted all of them.

The thing is, the thing I wrote about isn't illegal. When you write articles on Medium keep in mind you're writing on someone else's website and they don't give a damn about you. You are subject to their opinions about what is appropriate and what isn't. I have no doubt if an alt-right voice wrote on Medium and were controversial enough in their views they'd be deplatformed.

But also the way they handle it is just rude. Fuck them.

auct(10000) 3 days ago [-]

Your content may still be in Google's search cache or the Web Archive's cache.

house_atr(10000) 3 days ago [-]

'I have no doubt if an alt-right voice wrote on Medium and were controversial enough in their views they'd be deplatformed.'

What's the problem, again?

xkcd-sucks(3210) 3 days ago [-]

Haha you now know how they maintain their SEO -- by purging the politically incorrect stuff.

BTW, only druggies say 'research chemicals'; academic researchers just use 'drugs'/'compounds', or maybe 'fine organics', or the actual name of whatever it is. 'Research chemicals' is basically 'SWIM' in terms of being a druggie heuristic.

calimac(10000) 3 days ago [-]

Your story represents the dilemma of the shit internet of 2019, where most people are content slaves locked into plantations. Medium, Facebook, Twitter, et al. don't think twice before they ban human minds from expressing themselves, for any reason. All these corporations hide behind $600-an-hour attorneys and PR management teams. Fuck them all. replacefacebook

sci_c0(10000) 3 days ago [-]

Hi, wanted to offer a little help! You can still use The Internet Archive (or the WayBack Machine, as it is popularly called) [https://archive.org/web/web.php] to find and download your articles.

It is very helpful; I once used it to download some articles/pages for a friend from her blogging website. The website had gone down because it was on a paid domain she'd purchased from GoDaddy, and her one-year subscription had lapsed.

thefounder(3715) 3 days ago [-]

>> I have no doubt if an alt-right voice wrote on Medium and were controversial enough in their views they'd be deplatformed. But also the way they handle it is just rude. Fuck them.

Well fuck alt right too! They can get their own website after all. Why should Medium or any other company have to host their shit? People act like they have some kind of right to have their crap hosted by someone else. Companies are not public services after all.

the_other(10000) 3 days ago [-]

> I was writing on Medium because it ranks well in Google.

This highlights one of the ways Google has gone off-piste. The host or platform shouldn't affect your ranking/SEO this much. The same article on two platforms should rank side by side when searching by content. There's no reason to trust Medium above other platforms.

chdaniel(2825) 3 days ago [-]

I understand your POV, at the same time it was free to sign up, no time investment, no yearly costs for you etc. I know how it is to be banned overnight for something that you didn't expect (and without being malevolent).

I myself write my articles on my blog and then copy them over to Medium. A bit more work but might pay off in an event like this one

apatters(3799) 3 days ago [-]

> I was writing on Medium because it ranks well in Google

Is this the main/only reason people blog on Medium, or are there others? It seems like something that would be easy enough to develop an open source, self hostable version of, but with Google now deranking small, independent sites, Medium would be hard to beat from an SEO standpoint.

KingMachiavelli(10000) 3 days ago [-]

I wonder if anything has changed since GDPR passed, as I am pretty sure you are supposed to be able to download your content, but I'm not sure how that works when accounts are deleted and/or banned.

Regarding your situation and concerns of other comments...

Many people in this thread point out that marketing and creating an audience is difficult work. But I think the obvious-in-hindsight solution is to always keep a backup of your own content no matter what platform you publish it on. I'm not sure how Medium interprets their rules exactly, but I would be surprised if you couldn't at least link every post back to a mirror on your own blog, which would let you generate an audience and prevent you from losing everything if Medium goes nuclear on you.

As always, back up any content that you care about. Don't just back up your own content but also that of others - ever realize a YouTube video you like has been taken down by a bogus DMCA claim? Online services may have their own redundancy, but that doesn't matter if they decide to delete (your) content.
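
A minimal sketch of that backup habit, assuming you keep a list of your own post URLs (the URL list and function names here are hypothetical):

```python
from pathlib import Path
from urllib.parse import urlparse
import urllib.request

def local_path(url, root="backup"):
    """Map a post URL to a local file path, e.g.
    https://medium.com/@me/my-post -> backup/medium.com/@me/my-post.html"""
    parts = urlparse(url)
    name = parts.path.strip("/") or "index"
    if not name.endswith(".html"):
        name += ".html"
    return Path(root) / parts.netloc / name

def backup_posts(urls, root="backup"):
    """Download each post and store a copy under `root`."""
    for url in urls:
        dest = local_path(url, root)
        dest.parent.mkdir(parents=True, exist_ok=True)
        with urllib.request.urlopen(url) as resp:
            dest.write_bytes(resp.read())
```

Run it from cron after each new post and the mirror survives any platform ban; for a whole site, `wget --mirror` achieves much the same thing.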

dingo_bat(3915) 3 days ago [-]

> The thing is, the thing I wrote about isn't illegal.

Even if it were illegal that wouldn't be a reason to delete it.

manub22(10000) 3 days ago [-]

It's better to create your own blog and get traffic over time. Why should they earn money from your content?

Jerry2(93) 3 days ago [-]

>When you write articles on Medium keep in mind you're writing on someone else's website and they don't give a damn about you. You are subject to their opinions about what is appropriate and what isn't.

Same can be said about other platforms like YouTube etc. When you produce content for all these mega-platforms, you're nothing more than a sharecropper... a digital one. [0]

You really have no rights and you're at the whim of the 'feudal lord' who doesn't care about you and can take all your work away in an instant. Since there's thousand others that will take your place, they really don't care about what your losses are. They hold all the power and you have absolutely no chance of remedy.

[0] https://en.wikipedia.org/wiki/Sharecropping

ozzmotik(3886) 3 days ago [-]

just out of curiosity, which chemical did you write about?

superconformist(10000) 3 days ago [-]

But how will I 'clap' for top-shelf webshit content if it's not on Medium?

bubblewrap(10000) 3 days ago [-]

Isn't there a 'clap' plugin for Wordpress?

divanvisagie(4043) 3 days ago [-]

Potential violation of the prime directive...

IloveHN84(10000) 3 days ago [-]

Funny how, after 'All to Medium, ditch WordPress and personal blogs', people now want to go the other way around because of metered views and other limitations imposed by Medium's popularity.

I say it since ever: own your own content.

pergadad(10000) 3 days ago [-]

But you don't really own your WordPress. No way to get around using their services if you want to have proper spam filtering.

ajflores1604(10000) 3 days ago [-]

So just my personal take: I definitely benefit a lot from Medium's recommendation system, however it might work. Probably because I'm brand new to programming, and even fields like machine learning are very fresh to me even though it seems like everyone and their mom already knows the basics. So it helps to have a system that can see that I looked up a specific topic and recommend similar articles from that domain. A lot of the time I don't know what I don't know, so having this recommendation system helps illuminate the domain a bit more for me. And the general style of Medium, where the posts can be a more casual overview of a topic compared to the deep dive of a whitepaper, helps me grasp concepts faster. The recommendation aspect also helps me explore domains in a more focused way compared to the random nature of social networks that just bubble up whatever's popular. I'd be lying to myself if I said I haven't benefited from Medium.

Not trying to excuse them for what they're doing, just trying to point out from my personal experience the value they've added to me and why following personal blogs or places like reddit, which I see mentioned as alternatives, isn't a one to one replacement for me. I don't know what or who I'd need to follow if the domain is new to me.

I guess continuing on this train of thought, and not wanting to complain without putting out an idea for a solution...if something were to come in and be a viable alternative to Medium it would have to

>Allow for easy publishing (editor)

>Have reliable distribution/hosting (network)

>And have some way to explore related topics or publications (discovery)

For publication I think the community would have to come around to a standard format. Like a simplified LaTeX, or something similar to it, with an approachable interface that even my Mom or Dad could write something up in without needing to read documentation.

As for distribution, the only thing I can really think of is an ipfs style network. Similar to how torrents can provide some security in the survival of a file even if the originator decides to stop seeding (hosting). And also similar to torrenting, I can see ppl willingly giving up resources for 'the cause' if they themselves benefit enough from the network existing.

The only thing left, that also contains all the things I liked about Medium, is the discovery aspect. Also seems like it'd be the most difficult to implement on a distributed network. Maybe a few designated community servers, similar to tracker servers from the torrenting analogy, carry the information of what files are in the network. Im not sure exactly what the ipfs spec implements for this aspect of file discovery. But it seems some sort of designated 'discovery nodes' would be necessary. Maybe graph network nodes that ppl can query using their own discovery algorithms or ones shared within the community? Idk how well those would scale, I've heard from the Neo4j pitch that Behance rolled over their infrastructure to neo4j from cassandra and reduced their server requirements by a factor of 10. Maybe that kind of efficiency would be enough to support the network in general with a minimal number of critical nodes?

I haven't worked on an infrastructure level with anything I've mentioned, mostly just know of the technologies, so I might be completely off base with the individual things I proposed, but I feel like the general concept is worth dissecting.

luckylion(10000) 3 days ago [-]

> A lot of the time I don't know what I don't know, so having this recommendation system helps illuminate the domain a bit more for me.

That's a very good point. Before the advent of Google, we used to have curated link lists for topics that you could explore. It's become rare, and it's probably harder to do now that everything moves much faster (or maybe it doesn't and I've just gotten much slower).

tyingq(4088) 3 days ago [-]

Large.com is available for $850k if there's a VC thinking they can disrupt.

kowdermeister(2479) 3 days ago [-]

- So what's your USP?

- We'll use bigger font size... and feature more clickbait headlines.

jaza(3964) 3 days ago [-]

Would you like fries.com with that? It's for sale too.

regnerba(4131) 3 days ago [-]

I don't publish anything ever really so I don't understand the appeal of using Medium. Can anyone explain it to me? Is it just the ease of use? Write some text, click button, it's available online?

Aeolun(10000) 3 days ago [-]

Originally it was their easy editor. Now I just don't know, since every other blog copied the editor.

dickeytk(3231) 3 days ago [-]

I used to write on medium. The editor is the best. Nothing I've used is as good. The stats are also incredibly well done.

Some people are saying authors use it for the built-in audience. I can't say that was an appeal for me—or that I got very many readers from within medium anyways.

I started my own blog with hugo/netlify (though notably I don't post things very often). The main reason I switched is because so many people enable the pay wall on medium I feel the medium brand is hurting my own now, even though I've never enabled the pay wall myself.

I definitely miss that editor—and not having to update the site's dependencies.

ukulele(10000) 3 days ago [-]

Built-in audience of readers

wtmt(4124) 3 days ago [-]

Side note. I didn't read this post for two reasons. On mobile, the lack of inertial scrolling makes it like pushing hard against some massive slush, and the lack of a scroll bar prevents me from knowing how long the article is and whether I should read it quickly now or check it later.

A criticism of Medium, including its usability (I presume), should do a lot better if it wants to be read.

SCLeo(10000) 3 days ago [-]

I'm on mobile and I have inertial scrolling. I think it is probably because you are using iOS Safari.

iOS' inertial scrolling for some reason breaks very often. It happened to my website a couple times, and I had to fix with a bunch of css hacks.

Supporting iOS is just a pain. Sometimes, I would even rather support IE 11 than iOS.

(btw, all iOS browsers are forced to use Safari's rendering engine.)

stanislavb(2965) 3 days ago [-]

What is your best alternative? I'd say dev.to is a good one for the context of software/programming related content

julienreszka(3941) 3 days ago [-]

Blogger is just fine

coldtea(1190) 3 days ago [-]

>When did reading stuff on the web become pay to play?

The moment authors wanted to get paid. When did somebody else's writing become a free-for-all?

If, as you say, that [Medium] is where 'all your favourite content lives' (which means you appreciate the content), then it's troubling that you don't want to pay for it.

>Just so we are clear. Medium takes your content, rolls it up into a pretty SEO friendly package for themselves and sells it. Oh, and turns us all into seals waiting for someone to throw us a fish in the process. If you are lucky, you might even get a cut. You know. Like the sort of cut artists get on Spotify. Profit share I think the cool kids call it. So why is everyone still publishing on it? For what? More eyeballs? More attention? More reach?

All of the above. More eyeballs. More reach. More attention. A primed audience looking to read something. And also some authors make decent-ish money (far more than Spotify) on Medium. So there's that.

>Please. It's 2019. Learn to market yourself and your content. Quit being lazy waiting for Medium to do it for you. OWN YOUR PLATFORM.

Yeah, and how does that work for you? We've only read your blog because it was picked by an aggregator (HN).

teh_klev(1979) 3 days ago [-]

> We've only read your blog because it was picked by an aggregator (HN)

But then I only read articles on Medium because they're picked up by an aggregator (HN). After I've read the article I rarely find any of the suggested/related articles of any interest. I never head over to Medium to find stuff to read because I find quite a lot of the content is just puff, fluff and bad writing. It's kinda like Quora, mostly uninteresting, annoying to use but has a rare gem now and again that got linked to in a news aggregator.

spiderfarmer(4113) 3 days ago [-]

I agree. Publishing things online is easy, if you have the knowhow. If you don't know how to set up your website or blog it is hard. And gaining an audience and eventually getting paid for all your work is very hard.

Medium makes these things easier. At an expense, I agree. And there might be better ways for them to do that. And there might be better alternatives, but the reason Medium exists is because they fill a need.

nhumrich(3992) 3 days ago [-]

Except the problem is that 99% of authors on Medium don't get paid. So it's not really about authors wanting to be paid.

aerovistae(2851) 3 days ago [-]

Ironically this page doesn't work on mobile. It doesn't have inertial scrolling, and I just have no patience for that. I stopped reading after the first paragraph.

aembleton(4091) 3 days ago [-]

Works on Firefox 66.0.5 on Android 9

_seemethere(10000) 3 days ago [-]

Not everyone has the time to create their own platform and curation network. That's the reason why people will keep using Medium.

We need to stop acting like it's easy to build these types of platforms.

wmf(1979) 3 days ago [-]

I think the point is to reject the concept of 'platforms' and 'curation networks' completely, not to recommend that people try to create their own Medium equivalent. Just let your blog be a blog; you weren't going to win the social media lottery anyway.

sdan(3632) 3 days ago [-]

Use Ghost. It's free, given you have a server set up.

robjan(10000) 3 days ago [-]

I don't buy the curation argument. I see a lot of Medium articles published on Hacker Noon/Free Code Camp and then posted to Hacker News which remind me of the PHP tutorials that we always complain about.

chrshawkes(4108) 3 days ago [-]

I feel I could make a sufficient replica with free open source tools in a matter of days, maybe hours.

Apocryphon(2710) 3 days ago [-]

WordPress, Blogspot, even Xanga and LiveJournal predate Medium and didn't have to resort to subscription services.

regnerba(4131) 3 days ago [-]

The struggle for me in understanding why people use Medium is that I am a reader of Medium articles, but I only find them when posted on Reddit, HackerNews, or when they come up on a Google Search. So as far as I can tell it wouldn't matter if it was posted on Medium or a personal blog on their own site.

tonystubblebine(3663) 3 days ago [-]

I almost always end up posting in the comments when I see Medium on the front page of HN, basically the same thing every time.

I've found the Medium experience to be quite good, and even world positive.

I'd been running a self-improvement group blog as an ancillary initiative to the rest of my business. The blogging was kind of cool, but not successful enough to think much about.

Then Medium asked if we'd be interested in professionalizing what we do. Medium's CEO and I both worked for the tech publisher O'Reilly early in our careers, so I think that's why he thought we could pull it off.

And so I've gotten to really experience the before and the after of Medium's paywall. Before professionalizing, publishing seemed barely worthwhile. And it only was worthwhile if I could make the posts viral enough and the call to action catchy enough. That's not really my MO, which is why we struggled.

Medium's CEO has made the case that free content has been deeply corrupted by these marketing needs. Maybe some people can opt out, but I wasn't able to. I absolutely was cutting short my effort as a writer and then manipulating the start and end of articles to serve my marketing goals (otherwise, I couldn't justify the time).

In the new system, we just write differently. We know the article is the product people pay for and we don't need to corrupt it with any secondary marketing goals.

I see this as a world positive, where Medium has been able to create an ecosystem that allows for deeper and more authoritative articles. If you're reading self-improvement articles on Medium, a simple judge is to ask yourself if the author has any 1st hand experience. The vast majority of the free side of that topic on Medium is written by content marketers who are experts in virality but are basically just making up or cargo culting the advice. (Literally, much of it is farmed out to Upwork)

Part of what drew in our subject matter experts was enough money to be worth their time. We're going to send more than $100k to authors this year (probably a lot more).

I'm trying not to jump in here to market my own stuff. What I'm talking about above is our self-improvement publication. We're also testing two more pubs on different topics, which I think says something about how lucrative we're finding the editing. But it's too early for me to say how those are going. I have a number of other biases here (small amount of Medium stock, Medium's CEO was on my board for a long time and was my boss in 2005), but I'm hoping people see my actions, which are to double down on Medium over and over again, to be an indication that I'm a true fan.

gurlic(10000) 3 days ago [-]

I'm currently working on an alternative to Medium. So far it has all the elementary things like a working editor, notes and highlights, a decent commenting system. There are publications too, with teams, submissions etc. User profiles and publications have custom domains, custom CSS for branding. There's also a somewhat half-working Github integration for advanced writers but I'll need to work on it a fair bit to polish the edges. At some point, I'd ideally want to open-source the editor/bloggy bit and have people self-host it and perhaps push their content back to the platform for centralized distribution if they want.

It's still a work in progress and I'm trying to figure out some sort of strategy, especially a content policy around not allowing clickbait and spammy 'how to learn redis in 3 minutes' type articles. I'd probably have to cap the max number of users and publications to a few thousand since I don't have the resources for this to be anything more than a hobby project.

I'm aware of a couple of others working in the same space. Write.as comes to mind, though I don't think they're doing publications etc.

sdan(3632) 3 days ago [-]

As a person who blogs on certain areas in ML, the sole thing I look for in any platform is SEO. I tried Medium, but because of the lack of LaTeX support I had to go with my own blog. I'd really like an alternative to Medium with amazing SEO (as you explained).

empath75(2162) 3 days ago [-]

Writing a blog engine is trivial. Monetizing it is hard.

mceachen(3439) 3 days ago [-]

From one developer in a crowded marketplace (mine is photo management software) to another (you're describing a CMS), I'd spend some quality time researching open and closed source alternatives, and have a ready answer for how your product's features differentiate you from the crowd.

Make sure you also check out WordPress, Ghost, GatsbyJS, Hugo (and then expand the search by those terms + ' alternative').

Good luck!

jscholes(2756) 3 days ago [-]

> If you enjoy reading on the web chances are you've been forced to login to Medium at some point in the last week.

I've never once logged into Medium. I see a 'Pardon the Interruption' notice every time I want to read something, but I just hit Escape and move on with my day. If I had to pick a side, I'd probably say avoid Medium. But I don't know under what circumstances it forces you to log in just to read something.

larkeith(4102) 3 days ago [-]

Yet another reason to disable Javascript - no awful login popups.

teej(2385) 3 days ago [-]

The author has complete control over it.

Semaphor(10000) 3 days ago [-]

> I see a 'Pardon the Interruption' notice every time I want to read something


I stopped minding medium (as a reader) since installing that.

michaco33(10000) 3 days ago [-]

I've been doing this too. I'm also doing this with NYT, WSJ, FT -- all publications behind higher and higher paywalls, all publications I tried, but ended up leaving because they still serve you ads on their mobile apps.

Now the question is this: if I didn't care or it wasn't worth reading, why did I click on it in the first place?

Perhaps I don't care about this content as much as I thought anymore. Maybe we've been addicted to reading content, rather than actually making use of most of the content anyway.

PS: I'm trying out The Guardian now. No ads for premium users on mobile.

throw007(10000) 3 days ago [-]

I'm writing on Medium because its rank well on Google. I have several articles about the product I'm selling that rank 1-5 on Google and it gives me a lot of customers.

It'll take me a lot of time to build my own website and content that gives me the same result. So, there's that.

Sorry, not a native speaker.

regnerba(4131) 3 days ago [-]

No need to be sorry. Your English is fine and what you said makes perfect sense. As someone who doesn't post on Medium this is great in helping understand why people use it. So thank you for sharing.

snazz(3529) 3 days ago [-]

That is an important feature. Not having to convince Google that your domain isn't a spam cesspool is a major advantage of most any centralized blog/CMS platform (but especially one as major as Medium). Write.as doesn't seem to work as well for this purpose yet, unfortunately.

And your English is plenty easy to understand, no need to apologize about that.

yingw787(4043) 3 days ago [-]

Your English is great, and it's great that you prioritize shipping over worrying about which blogging platform to use. Shipping is very important.

That being said, I do think a small investment into moving onto your own platform may be worth it, and that might kick in much earlier than you'd think. I use Hugo and a fork of hugo-minimo-theme for my technical and personal writing, and it took maybe three days to set up (granted I have some technical experience). If you paid a contractor to spend a week setting up a statically compiled site with a CMS, you may get competitive SEO without having to worry about content licensing or platform updates. I think statically generated content is generally friendly to search engines. I don't update my blog infra at all really, and there's very few steps involved if I needed to relearn how to do it. It is possible to use free open source software and have it get out of your way in terms of making money.

markfer(4127) 3 days ago [-]

Any tips on replicating the same results? Would love to hear more

nullwasamistake(10000) 3 days ago [-]

Not sure if this is due to medium or not. Back in the day I used to write articles on business consulting focused on high value keywords. After 7+ years some of them still rank 3-10 on Google for the target keywords.

Medium is just easier to spin up than a domain + WordPress. I don't see any other advantages

Historical Discussions: ZombieLoad: Cross Privilege-Boundary Data Leakage on Intel CPUs (May 14, 2019: 852 points)

(852) ZombieLoad: Cross Privilege-Boundary Data Leakage on Intel CPUs

852 points 6 days ago by Titanous in 951st position

www.cyberus-technology.de | Estimated reading time – 13 minutes | comments | anchor

ZombieLoad is a novel category of side-channel attacks which we refer to as data-sampling attack. It demonstrates that faulting load instructions can transiently expose private values of one Hyperthread sibling to the other. This new exploit is the result of a collaboration between Michael Schwarz, Daniel Gruss and Moritz Lipp from Graz University of Technology, Thomas Prescher and Julian Stecklina from Cyberus Technology, Jo Van Bulck from KU Leuven, and Daniel Moghimi from Worcester Polytechnic Institute.

In this article, we summarize the implications and shed light on the different attack scenarios across CPU privilege rings, OS processes, virtual machines, and SGX enclaves, and give advice over possible ways to mitigate such attacks.


A short summary of what this security vulnerability means:

  • By exploiting the CPU's so-called bypass logic on return values of loads, it is possible to leak data across processes, privilege boundaries, Hyperthreads, as well as values that are loaded inside Intel SGX enclaves, and between VMs.
  • Code utilizing this exploit works on Windows, Linux, etc., as this is not a software- but a hardware issue.
  • It is possible to retrieve content that is currently being used by a Hyperthread sibling.
  • Even without Hyperthreading, it is possible to leak data out of other protection domains. During experimentation it turned out that ZombieLoad leaks endure serializing instructions. Such leaks do, however, work with lower probability and are harder to obtain.
  • It is an implementation detail what kind of data is processed after a faulty read.
  • Using Spectre v1 gadgets, potentially any value in memory can be leaked.
  • Affected software:
    • So far all versions of all operating systems (Microsoft Windows, Linux, MacOS, BSDs, ...)
    • All hypervisors (VMWare, Microsoft HyperV, KVM, Xen, Virtualbox, ...)
    • All container solutions (Docker, LXC, OpenVZ, ...)
    • Code that uses secure SGX enclaves in order to protect critical data.
  • Affected CPUs:
    • Intel Core and Xeon CPUs
    • CPUs with Meltdown/L1TF mitigations are affected by fewer variants of this attack.
    • We were unable to reproduce this behavior on non-Intel CPUs and consider it likely that this is an implementation issue affecting only Intel CPUs.
  • Sole operating system/hypervisor software patches do not suffice for complete mitigation:
    • Similar to the L1TF exploit, effective mitigations require switching off SMT (Simultaneous MultiThreading, aka Hyperthreads) or making sure that trusted and untrusted code do not share physical cores.
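
On recent Linux kernels the SMT state can be inspected (and changed) through the `/sys/devices/system/cpu/smt/control` sysfs file; reported values include `on`, `off`, `forceoff`, and `notsupported`. A small sketch for checking whether this mitigation is in effect (the helper name is mine; older kernels and non-Linux systems lack the interface):

```python
from pathlib import Path

SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")

def smt_status(path=SMT_CONTROL):
    """Return the kernel's reported SMT state ('on', 'off', 'forceoff', ...),
    or 'unknown' if the sysfs interface is absent."""
    try:
        return Path(path).read_text().strip()
    except OSError:
        return "unknown"
```

On supporting kernels, `echo off > /sys/devices/system/cpu/smt/control` (as root) disables SMT at runtime without a reboot.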

If you have any questions about exploits like Meltdown/Spectre/ZombieLoad and their derivatives, their impact, or the involvement of Cyberus Technology GmbH, please contact:

Example Attacks

We present two example attacks that are both mounted on a browser as the victim process. The browser leaking its data runs in one Hyperthread and the adversary application disclosing the values runs as sibling thread on the same physical core.

URL Recovery

In this scenario, we reconstruct URLs that are being visited by the victim browser process.

An unprivileged attacker with the ability to execute code can reconstruct URLs being visited in Firefox.

In its basic form, the attacker has no control over the leaked data i.e., it is necessary to filter for interesting data. Hence, our adversary app searches for typical URL prefixes.

Note that data such as session cookies or credit card numbers follows predictable patterns in memory, and hence represents a realistic target for such attacks.

Keyword Detection

In this scenario, we constantly sample data using ZombieLoad and match leaked values against a list of predefined keywords.

[Video: proof-of-concept exploit demonstration]
The adversary application prints keywords whenever the victim browser process handles data that matches the list of adversary keywords.

Note that the video shows a browser that runs inside a VM: ZombieLoad leaks across sibling Hyperthreads regardless of virtual machine boundaries.

Technical Background

In a nutshell: ZombieLoad is a transient-execution attack that observes the values of memory loads on the current physical core from a sibling thread. It exploits that the memory subsystem is shared among the logical cores of a physical core.

Simultaneous Multithreading / Hyperthreading

HyperThreading is Intel's implementation of Simultaneous MultiThreading, both are also usually abbreviated as HT and SMT. This section explains the value of HT/SMT for the performance and power efficiency of modern CPUs, and also why it imposes security risks that are exposed by the discovery of ZombieLoad (and similarly by L1TF/Foreshadow).

SMT boosts the CPU's instruction throughput by increasing the utilization of the independent execution units that exist within the pipeline. Already without SMT, the CPU architecture is capable of decomposing the instruction sequence into operations like loads, stores, and calculations. For different kinds of operations the CPU has different execution units (EUs). EUs that are in high demand are replicated. Operations that do not depend on each other can be processed in parallel by the corresponding EUs. The higher their utilization, the higher the overall performance.

In program sequences with too many dependencies between operations, a lot of EUs might end up idling. SMT further increases CPU utilization by running two threads concurrently on one physical core. Processing two instruction streams increases the probability of finding independent operations to assign to the available EUs that might otherwise sit idle.

Execution unit utilization without vs. with simultaneous multithreading

The diagram shows an example load of two threads that both individually do not fully utilize the CPU's EUs. Arrows show dependencies between the operations, and blocks with an alphabetic suffix model operations that take more than one CPU cycle. Complex arithmetic, for example, requires more time, and memory values are not always immediately available. With SMT enabled, the CPU's EUs can be fully utilized.

If SMT is enabled, the operating system sees two independent CPUs where only one physical core exists. Such logical cores each have their private architectural state, but they share most of their execution resources - which is one of the reasons why SMT greatly improves energy efficiency.

State Sharing between logical CPU cores in multithreading mode

The second diagram visualizes how some parts of a physical core's resources are shared between two logical cores if it is run in multithreading (MT) mode. If one of the cores is currently not operating (e.g., after executing a halt instruction) or Hyperthreading is deactivated, all resources belong to the only running logical core (see modes ST0 and ST1).

When the operating system schedules two threads from completely different applications on the logical cores of the same physical core, data of both applications is processed at the same time in the shared execution resources. ZombieLoad exploits this circumstance.

Transient Execution of Faulting Reads

CPUs maximize execution unit utilization by speculating when it is unclear what has to be done next: Conditional jumps that depend on values that are yet to be calculated are an example, because it cannot be known for sure in advance if the jump shall be taken or not. If the speculation is wrong, the pipeline rolls back all wrongly performed operations and ensures that none of their results become visible outside the CPU so that the system stays in a correct state. Otherwise, if the speculation is right (which just needs to be "most of the time"), the CPU's degree of utilization, and hence its performance, was successfully increased.

Instructions that are executed speculatively or out-of-order but whose results are never committed to architectural state are called transient instructions. Any fault that occurs during transient execution of an instruction is handled when the instruction retires, the last pipeline stage.

If an instruction stream depends on a value from a read operation that turns out to trigger a fault of some kind, the vulnerable CPUs speculatively use some value placeholder during transient execution. Such faults may be of architectural (e.g., exceptions) or of microarchitectural nature (e.g., updates of accessed/dirty bits in the page table). Normally, this does not present a problem, because the effects of this calculation will never leave the retire phase. By using side channels like the CPU cache subsystem (our article about Meltdown explains this in detail), this placeholder value can be extracted by an attacker.

The Attack

ZombieLoad enables four different attack scenarios. All of them have in common that they trigger a faulty read and extract the data used by transiently executed operations via a side channel.

As already stated in the technical background section, operations that depend on the value of a faulty read operation may be executed transiently with wrong data. It is an implementation detail what kind of data is processed during that time window.

It turns out that on Intel processors this wrong data may be data from outside the current process but still loaded by the physical CPU core for whatever reason, which can be:

  • data from kernel space or other applications
  • data from outside the VM: other VM or hypervisor
  • data from inside a currently executing SGX enclave

An important detail is that the attacker has no direct control over what data is read here. Leaked data could be uninteresting because it comes from an irrelevant other process, VM, etc. If it comes from the right process, it might still be the wrong data, because the address from which the data is returned is beyond the attacker's control. (In Meltdown or L1TF, on the other hand, the attacker chooses the address.)

Because of this restriction, the class of attacks that ZombieLoad enables is referred to as data-sampling attacks. The attacker simply samples leaking data that is currently being used by the victim process.

Attack Scenarios

The different attack scenarios are described below, each accompanied by an attacker model and example scenarios.

These attack scenarios can be enhanced in a way that gives the attacker control over the addresses from which data is leaked. To achieve that, they can be combined with Spectre-v1 gadgets, which lead the CPU to prefetch interesting data from specific addresses. We are going to mention where this can be useful, but will not go into detail, as this goes beyond the scope of this article.

Cross-Process User Space Leakage

In this scenario, the attacker executes unprivileged code on one logical CPU core, while a victim application runs on the other logical (but same physical) core.

The victim application could be a browser or password manager application which contains secrets. While the victim application is dealing with interesting data, the attacker triggers faulty read operations in his own thread and samples leaked data from the victim process.

The attacker has no control over the address from which data is leaked, so it is necessary to know when the victim application handles the interesting data. For example, if the attacker is looking for AES keys, he can use the fact that shared libraries like OpenSSL are usually used for encryption and decryption. By flushing the code of the shared AES encryption/decryption routines out of the cache, the attacker thread can sample the access times for those addresses - they suddenly drop once the victim process executes that code again. That is the moment in which the probability of leaking (parts of) the AES keys rises and the data-sampling attack can be started.

Intel SGX Leakage

The victim code that utilizes SGX and the attacker code are assumed to run on the same physical core, but on different logical cores.

SGX's typical threat model assumes that enclaves are still secure, even if the attacker has full control over the surrounding operating system.

Under such conditions, ZombieLoad allows for leaking data of running secure enclave code with the same strategy as in the cross-process user space leakage attack scenario.

Virtual Machine Leakage

Similar to the cross-process user space leakage scenario, attacker code and victim code run on the same physical but different logical core. In this scenario, both attacker and victim may run on individual virtual machines.

The attacker might for example upload and run a prepared guest image on a cloud hosting service where the VMs of other customers are co-located in order to leak their data.

Kernel Leakage without Hyperthreading

Even with Hyperthreading disabled, ZombieLoad allows leaking data from other protection domains on the same logical core. If a faulty read leaks data during transient execution, this data may also originate from kernel space. Such an attack would be mounted around transitions between kernel space and user space, e.g., on return paths from system calls.

Attacks of this class are much harder to mount because the return paths from such other protection domains to user space/VM are less likely to leak interesting values. One reason for this is serializing instructions, which do not prevent leakage in general but reduce the amount of leaking memory.

In order to trigger leaks from interesting memory addresses, the attacker could use Spectre-style gadgets prior to mounting a ZombieLoad attack. Such gadgets may be hard to find on the return paths to user space/VM. They could, however, be deliberately installed in proprietary software to provide backdoors, and would be very hard to detect.

Cross-VM Covert Channel

This is not an attack scenario per se, or at least a very different one compared to the previous ones.

Using ZombieLoad as a covert channel, two VMs could communicate with each other even in scenarios where they are configured in a way that forbids direct interaction between them. For example, isolation policies could be in place such that one VM offers unrestricted Internet access (watching YouTube videos) while the other only has access to the corporate network (reading confidential documents).

Mitigation Techniques

The safest workaround to prevent this extremely powerful attack is running trusted and untrusted applications on different physical machines.

If this is not feasible in a given context, disabling Hyperthreading completely is the safest mitigation. This does not, however, close the door on attacks on system call return paths that leak data from kernel space to user space.

In case disabling HT is not feasible for performance or other reasons, trusted and untrusted processes should never be scheduled on the same physical core. This, again, does not mitigate all attack scenarios, because adversary processes could still leak data from the superordinate kernel or hypervisor.

For more detailed information about mitigation vectors, please consult the ZombieLoad research paper.

All Comments: [-] | anchor

polskibus(1179) 6 days ago [-]

Can this attack allow the attacker to escape public cloud isolation methods and break into the control plane or other VMs?

readams(4074) 6 days ago [-]

It would have, but it's likely the cloud vendors have already deployed defenses.

robmccoll(10000) 6 days ago [-]

That depends on what you mean by 'break into'. If you mean sample data (read) from the control plane or other VMs, then yes; however, the attacker may have difficulty targeting which data is read. The attacker would not be able to write to that memory or gain any sort of execution privilege using this method alone.

gmueckl(3756) 6 days ago [-]

These CPU flaws make it seem as if virtualization in the data center is becoming really, really dangerous. If these exploits continue to appear, the only way forward would be dedicated machines for each application of each customer. Essentially, this might be killing the cloud by 1000 papercuts, because it loses efficiency and cost effectiveness, and locally hosted hardware does not necessarily have to have all mitigations applied (no potential of unknown 3rd-party code deployed to the same server).

dooglius(10000) 6 days ago [-]

Well, dedicated machines for each security domain for each customer, a lot of the time it's fine for many applications to be in the same security domain.

Thaxll(10000) 6 days ago [-]

You should use large instances that don't share the same CPU socket; on AWS, for example, that would be c5.9xlarge and above.

ljlolel(3003) 6 days ago [-]

it increases cloud revenues because of slow downs in CPU, and people can't move off cloud because they're locked in and can't hire datacenter engineers anyway

peterwwillis(2505) 6 days ago [-]

The point of virtualization isn't to add security. It gives you functionality you just cannot have otherwise, and the cloud enables you to scale in a way that is impossible otherwise. If there are security holes, they get patched and the market moves on. It's not just going to abandon either the cloud or virtualization.

dboreham(3669) 6 days ago [-]

There are bare metal 'cloud' providers such as Packet.net where you get the click n deploy convenience of the cloud but a physical machine. They have quite small machines in the inventory that are close to price competitive with VMs. Even Amazon has this bare metal capability FWIW, but afaik only for big expensive machines.

gnode(10000) 6 days ago [-]

> dedicated machines for each application of each customer.

I don't think you need to go this far. You can probably get away with circuit switching small blocks of hardware, and fully resetting them between handovers. Although you'd have to ensure sufficient randomisation / granularity to destroy side channels in the switching logic.

NathanKP(2339) 6 days ago [-]

Interesting to note that AWS has been working on their own custom silicon, such as the announced Arm based AWS Graviton powered machines.

We will most likely see a continued divergence between 'consumer silicon' which is designed for speed in a single tenant environment on your local desktop or laptop, and 'cloud silicon' which is optimized to protect virtualization, be power efficient, etc. I'd predict that this will actually lead to increased efficiency and lower prices of cloud resources rather than the 'death by a 1000 cuts' that you are proposing.

Tharkun(3848) 6 days ago [-]

Many years ago, OpenBSD's Theo de Raadt made a sneer at virtualization, saying something along the lines of 'they can't even build a secure system, let alone a secure virtualized system'. I can't remember who he was referring to specifically, but we've certainly been seeing a lot of similar vulnerabilities.

dchest(600) 6 days ago [-]

Except most companies don't care about security of the cloud apart from the magical 'compliance' and that they are on the same cloud as everyone else.

mettamage(3738) 5 days ago [-]

I just want to plug their hardware security course (at the VU University Amsterdam). It's an amazing course, and it costs 1200 euros for students who need to pay full price. I learned a lot about Spectre, Meltdown, novel forms of cache attacks, and Rowhammer when I took it.

1Y3(10000) 5 days ago [-]

Offtopic: Are you familiar with the AI departments/ courses (master) at VU? I have the opportunity to go but haven't decided yet. (Interested in human-centred and modern ML with neural networks)

mr_overalls(4000) 6 days ago [-]

At what point do we simply revert to using typewriters for authoring sensitive documents, and pneumatic tubes (couriers for WAN) for networking?


gambler(3909) 6 days ago [-]

We don't need to revert to typewriters. We just need computers designed with a real security model in mind, instead of piles of ad-hoc mitigations. However, I bet no one will invest in it until one of these exploits brings down AWS, takes over Google's crawlers, or something else of that sort.

criley2(10000) 6 days ago [-]

Long ago? https://www.theguardian.com/world/2013/jul/11/russia-reverts... (also https://www.cia.gov/library/readingroom/document/cia-rdp78-0...)

But assuming a typewriter has no attack vectors is just as foolish as insecure networks IMO.


Also: detecting text through keystrokes previously discussed here https://news.ycombinator.com/item?id=7448976 (https://people.eecs.berkeley.edu/~tygar/papers/Keyboard_Acou...)

Heck while I can't find a quick source, I remember a story about how the CIA designs rooms/walls and buildings to prevent sound from predictably bouncing through rooms in ways that could be captured from afar.

Spooks are usually 10 steps ahead of public common sense in this area.

bigmattystyles(10000) 5 days ago [-]

Why doesn't this type of news cause INTC to tank - they're up today. I know the market is up today, but (and it's probably my innate overreaction) I would think this sort of news would cause its stock to suffer.

ct520(10000) 5 days ago [-]

It should take a couple of days, also Intel is coming off continuous losses.

neop1x(10000) 5 days ago [-]

Because it depends on customer behavior. Intel has a strong name and people know their CPUs are fast. We see various IT security problems almost daily and most people don't care... It would probably require some massive exploits, data leaks, identity thefts at cloud providers and following lawsuits against Intel to see some significant stock price change. :)

nodesocket(2185) 5 days ago [-]

I follow the market and tech stocks pretty closely, and it is extraordinarily rare for breaches, vulnerabilities, or exploits to affect the stock price of companies, despite the outrage from the tech community.

mda(4123) 5 days ago [-]

I think there is an expectation that Intel's new-generation CPUs won't have these vulnerabilities and that Intel will sell a lot more of them to replace the pieces of crap they have sold for ridiculous prices. Intel is actually probably happy about these, because no one cares.

nine_k(4098) 6 days ago [-]

In short:

* Core and Xeon CPUs affected, others apparently not.

* HT on or off, any kind of virtualization, and even SGX are penetrable.

* Not OS-specific, apparently.

* Sample code provided.


jolopy(10000) 5 days ago [-]

Many Pentiums, Celerons and Atoms are also affected.

waddlesplash(4131) 5 days ago [-]

And here's the mitigation in NetBSD: https://github.com/NetBSD/src/commit/afab82aeafd0c51afc036a8...

Essentially: Intel released a microcode update which makes the `verw` instruction now magically flush MDS-affected buffers. On vulnerable CPUs, this instruction now needs to be run on kernel exit; the microcode update won't do it automatically on `sysexit`, unfortunately.

spockz(4102) 6 days ago [-]

According to their blog post[1], there is little you can do against this. Running different applications on different CPUs helps against them reading each other's data, but a rogue process can still read data from the "superordinate kernel" or hypervisor.

rurban(3289) 6 days ago [-]

Of course you can fix it. I fixed it in Dec 2018 for most such attacks in my safelibc memset_s implementation, but nobody wanted to use it, because securely purging the buffers with secrets via mfence was deemed too slow. So everybody can read your secrets via side-channel attacks. These tiny MDS buffers need to be purged with verw or l1d_flush followed by an lfence. This needs to be added to memset and memset_s variants. This is much faster. But it will not happen; libc maintainers notoriously don't care, even crypto maintainers don't. Only Linux does.


Fej(3839) 5 days ago [-]

What is the recommended course of action? Stop buying Intel products, and devices which contain them?

What about devices with older processors? I'm still running a Sandy Bridge rig and it works fine, except for the side channel vulnerabilities. It's probably not going to be patched. I also have a cheaper computer with a Skylake processor, which is newer yet still vulnerable!

It's only a matter of time until something really nasty comes along, making all these PCs dangerous to use. What then? Lawsuits?

My questions are only partially rhetorical.

userbinator(908) 5 days ago [-]

The easiest prevention is to stop running untrusted code, or don't start doing so if you're not already.

The 'elephant in the room' with all these attacks starting from Spectre/Meltdown is that an attacker has to run code on your machine to be able to exploit them at all.

To the average user, the biggest risk of all these side-channels is JS running in the browser, and that is quite effectively prevented by careful whitelisting.

As you can probably tell, I'm really not all that concerned about these sidechannels on my own machines, because I already don't download and run random executables (the consequences of doing that are already worse than this sidechannel would allow...), nor let every site I visit run JS (not even HN, in case you're wondering --- it doesn't need JS to be usable.)

cfallin(10000) 5 days ago [-]

The stream of critical CPU vulnerabilities starting with Spectre/Meltdown last year is related to speculative execution, and not just to Intel. (AMD and ARM CPUs are also vulnerable to Spectre, for example.) Intel CPUs are sometimes vulnerable to additional attacks because they speculate in more scenarios than other designs. But fundamentally, as long as multiple different trust domains share one CPU that speculates at all, or has any microarchitectural state (e.g., caches), there are likely to be some side-channel attacks that are possible.

The important thing to realize is that speculation and caching and such were invented for performance reasons, and without them, modern computers would be 10x-100x slower. There's a fundamental tradeoff where the CPU could wait for all TLB/permissions checks (increased load latency!), deterministically return data with the same latency for all loads (no caching!), never execute past a branch (no branch prediction!), etc., but it historically has done all these things because the realistic possibility of side-channel attacks never occurred to most microarchitects. Everyone considered designs correct because the architectural result obeyed the restrictions (the final architectural state contained no trace of the bad speculation). Spectre/Meltdown, which leak the speculative information via the cache side-channel, completely blindsided the architecture community; it wasn't just one incompetent company.

The safest bet now for the best security is probably to stick to in-order CPUs (e.g., older ARM SoCs) -- then there's still a side-channel via cache interference, but this is less bad than all the intra-core side channels.

blablabla123(10000) 5 days ago [-]

I think it depends very much on how you use your computer and also how much comfort you need, so the answer is very individual. If you want to go totally hardcore secure, you might consider OpenBSD on some obscure architecture like RISC-V or Power - the Talos II workstations are really powerful. (Power was AFAIK originally vulnerable to Spectre or Meltdown, but there are mitigations and it's 100% open source.) Probably it's smart to use 2FA on separate hardware (smartphone, Yubikey or smartcard, for instance) and to make it a habit to delete data and apps that you don't need. Oh, and installing only software from trusted sources - whatever that means for you - and an adblocker might also help to prevent malicious JS code. Also, for many people, iPads serve all the needs they have, and by default all the native apps have been reviewed.

Probably it's smart to see a computer not as a walled garden but more as a sieve.

omarforgotpwd(10000) 5 days ago [-]

Vulnerabilities are everywhere, in everything. They just haven't been discovered yet.

bin0(10000) 5 days ago [-]

From what I have heard, these flaws mostly affect Intel because they are the largest CPU manufacturer. They also dominate the datacenter and cloud-compute industries for now, which are by far the highest value targets.

But as an Intel consumer, I am not happy. My understanding is that more stuff can be fixed in microcode, but I suppose a bug could show up which was not practically fixable. If that happened, I would certainly sue or join a class-action lawsuit. Probably the class-action route, because even if I didn't get anything, I would be just mad enough at Intel to want them to suffer.

Of course, we do have consumer protection agencies; it is possible that they would step if Intel had sold what would effectively be a defective product.

robdachshund(10000) 5 days ago [-]

Yes. Stop supporting this company and their duplicitous practices. After bulldozer flopped they stopped competing and just made 8 years of sandy bridge die shrinks with extra speculative sauce to speed things up.

Their former CEO also committed insider trading by selling off most of his stock before they revealed the vulnerabilities.

I think they are just squeezing people while they can before ARM takes over. AMD is equipped to adapt with their chiplet design. Intel just has an 8 year old ISA that's rooted in the Pentium M from the early 00s. They couldn't even hit 10nm after pushing the launch back for years.

I'm hoping they have to face the music and ARM or RISC V takes over the market with tons of healthy competition from more than just 2 companies.

echopom(10000) 5 days ago [-]

> What is the recommended course of action? Stop buying Intel products, and devices which contain them?

There is absolutely nothing to be done on our level about this.

I'm fairly convinced this is a systemic issue that can only be solved by redesigning modern CPU and computer architecture almost entirely.

I can draw a parallel to approximately all Intel CPUs, which are known to have a dedicated 'mini CPU' (running Minix) that is an absolute black box and has been found to be vulnerable to a wide variety of attacks for nearly a decade...

Not only do we need to redesign computer and CPU architecture, but we desperately need to make that entire process and knowledge open source, available to all, and more transparent.

Today this entire knowledge is in the hands of a few gigantic corps, who are keeping it to ensure their monopolistic position.

rodgerd(3803) 5 days ago [-]

> Stop buying Intel products, and devices which contain them?

None of that helps with your public cloud workloads.

draw_down(10000) 5 days ago [-]

I don't think anyone is talking about courses of action yet. 'What you should do about it' is really out of scope for announcing a vulnerability; what you should do is often contextual anyway.

Like, what exactly are you hoping to hear? A bunch of our existing processors leak information, and it's a big problem. Big problems don't always have quick, clean solutions. Sorry.

spamizbad(10000) 5 days ago [-]

Step one: switch to in-order cores. Step two: cheekily author a Medium article titled Tomasulo's Algorithm Considered Harmful

fakwandi_priv(10000) 5 days ago [-]

Apparently Intel attempted to play down the issue by trying to award the researchers the 40,000 dollar tier reward plus a separate 80,000 dollar reward as a 'gift' (which the researchers politely declined), instead of the maximum 100,000 dollar reward for finding a critical vulnerability.

Intel was also planning to wait for at least another 6 months before bringing this to light if it wasn't for the researchers threatening to release the details in May.

Source in the dutch interview: https://www.nrc.nl/nieuws/2019/05/14/hackers-mikken-op-het-i...

close04(3647) 5 days ago [-]

> Intel was also planning to wait for at least another 6 months before bringing this to light

Of course, until the legally agreed date when they can dump shares so there's no obvious proof that it's insider trading. Isn't that what (then) Intel CEO Brian Krzanich did after Meltdown/Spectre?

nullwasamistake(10000) 5 days ago [-]

It's worse than that. Some of these flaws have been known for over a year already. Many vendors with implementation details of the fixes have said 'full mitigation may require disabling hyperthreading'.

Wtf does that mean exactly? Do the patches and microcode work or do they not? I expect the truth to come out as OSS maintainers come out of embargo and others analyze the patches. But it sure looks like VMs on your favorite cloud provider will still be vulnerable in some ways, because they're not turning off HT.

Wired has many details of your Dutch link in English. https://www.wired.com/story/intel-mds-attack-speculative-exe...

Intel pressuring vendors to not recommend disabling hyper threading? Apple has added the option to MacOs, so presumably the mitigations are not completely effective: https://www.theregister.co.uk/2019/05/14/intel_hyper_threadi...

mwcmitchell(10000) 5 days ago [-]

That speaks volumes to the integrity of the researchers. Similarly, it speaks to a lack of the same @ Intel. Bribing for silence is not the way to deal with vulnerabilities. I'm glad the researchers are getting some recognition.

easytiger(3711) 5 days ago [-]

You have to admire the complex complicity. Someone smart enough to understand the depths of the problem had to guide that conversation

bijant(10000) 5 days ago [-]

Intel has abused the responsible disclosure process for economic gain. Their leadership was not interested in a repeat of the Spectre and Meltdown impact on their stock price and made the (most likely accurate) assessment that recurring news of Intel vulnerabilities would harm their stock more than a delayed, cumulative release.

As a result, academic researchers were denied some of the credit they would otherwise have rightfully earned, because their individual contributions are buried in a sea of similar publications. Research efforts were thus needlessly duplicated: research which could have formed the basis for subsequent work was unavailable, and (publicly funded) researchers wasted time duplicating results. If two researchers discover the same vulnerabilities independently, there should be no embargo on disclosure, because it has to be assumed with high likelihood that third parties might already be actively exploiting them. The public has to be warned, even if no effective mitigation is available. And if AMD and ARM are not affected by a subset of the vulnerabilities, security-conscious users could have reduced their exposure by using competitors' chips.

In this case the practice of responsible disclosure has been turned on its head. There should no longer be any responsible disclosure with Intel as long as they do not commit to changing their behavior.

guido_vongraum(10000) 5 days ago [-]

People should realize that the ancient Chinese were onto something when they said that all phenomena evolve only so much before they tip over the peak of maximum development and inevitably rumble downhill into overdevelopment.

P.S. Wow, hit a soft spot. Flagging this for what? For being unloyal to the ideology of everlasting growth? Try again as much as you can.

TACIXAT(4118) 5 days ago [-]

Downvoted for the frothy edit. Chill out there ya toughy. Someone clearly hit a soft spot on you.

S_A_P(4099) 6 days ago [-]

This sentence killed me: 'Daniel Gruss, one of the researchers who discovered the latest round of chip flaws, said it works "just like" it PCs and can read data off the processor. That's potentially a major problem in cloud environments where different customers' virtual machines run on the same server hardware.'

What are they saying here?

dang(172) 6 days ago [-]

This is from the Techcrunch article on the story: https://techcrunch.com/2019/05/14/zombieload-flaw-intel-proc...

We merged that HN thread (https://news.ycombinator.com/item?id=19911465) into this one, via this other one (https://news.ycombinator.com/item?id=19911715).

jcoffland(3900) 6 days ago [-]

It should read:

> ...said it works "just like" in PCs

The number of mistakes in the Techcrunch article is atrocious.

ksec(2188) 6 days ago [-]

Sorry for being naive. Are these kinds of CPU security vulnerabilities new? Why it is in the past 20 years we have had close to zero in the news ( At least I wasn't aware of any ) and ever since Spectre and Meltdown we have something new like every few months.

And as far as I am aware they are mostly Intel-CPU-only. Why? And why not AMD? Did something in the Intel design process go wrong? And yet all the cloud vendors are still buying Intel and giving very little business to AMD.

robotresearcher(10000) 6 days ago [-]

Since there is relatively little AMD running, you would expect relatively little investment in attacking it.

lawnchair_larry(2703) 6 days ago [-]

This is a common pattern for new bug classes. Nobody thought to look at this, and when they did, the rabbit hole went deep. We likely haven't seen the bottom.

AMD are not better. They're probably worse. They'll be looked at when the Intel tree stops bearing fruit. But finding an Intel bug is higher impact, so that's what researchers want to look at.

simonsays2(10000) 6 days ago [-]

I think it's because they have designed automatic vulnerability detection devices that are efficient at finding even very obscure issues.

nine_k(4098) 6 days ago [-]

Running many instances of various untrusted code on the same server is 'new': it came with the cloud infrastructure.

Running many instances of various untrusted code on the same client machine is 'new': it came with web apps, and with mobile apps.

Before several years ago, it was sort of a non-issue, because to exploit such a vulnerability one would need to write a virus or a trojan, and with this approach, there are many easier ways of privilege escalation.

Something like the 'cloud' likely existed on IBM mainframes under OS/VM [1], but System/370-compatible CPUs likely lacked all these exploitable speculative execution features.

[1]: https://en.wikipedia.org/wiki/VM_(operating_system)

cesarb(3320) 5 days ago [-]

> Why it is in the past 20 years we have had close to zero in the news ( At least I wasn't aware of any ) and ever since Spectre and Meltdown we have something new like every few months.

It's a new vulnerability class. Prior to Spectre, nobody thought that code which didn't execute (and couldn't execute) could affect architectural state in an observable way. It's hard to overstate how bizarre the vulnerabilities from the Spectre family are from a software point of view: it's leaking data from code that not only didn't execute yet, but also can never execute, and in some cases doesn't even exist! It's like receiving a packet your future self sent to the past, except that your future self had been dead for two years when he sent the packet, and for some reason he's actually a parrot.

Once a new vulnerability class is discovered, researchers will start looking for new bugs in and around that class. Which is why we have seen lately so many issues disclosed around speculative execution and data leaked through shared microarchitectural state.

clucas(10000) 6 days ago [-]

A lot of reasons - one, we only recently (in academic research time) started using single servers to host services from multiple customers, so the value of these sorts of attacks only recently became apparent.

Second, as I understand it, Spectre and Meltdown really started this whole parade because prior to those vulnerabilities, speculative execution attacks were something only academics ever talked about - everyone assumed it would be too difficult to pull off in the real world. When that received wisdom was proved wrong, it probably opened the floodgates for researchers - both in terms of intellectual interest and money.

Also, re: why Intel and not AMD... I think Intel is probably a higher-dollar target due to their dominance in the server market, but also probably because they have been neglecting QC for years... see, e.g., http://danluu.com/cpu-bugs/

wmf(1979) 6 days ago [-]

I think it is definitely worth introspecting about the history. It has been known for over 20 years that sharing pretty much anything creates side channels but nobody knew how to reliably exploit them and it was assumed that side channels might never be exploitable. In recent years there has been massive progress in practical data extraction using side channels.

daeken(1006) 6 days ago [-]

Pandora's box was opened with the public disclosure of Spectre and Meltdown. Security researchers will continue to find new and better ways of attacking the security boundaries in processors, and there's unlikely to be an end to this any time soon. Exciting time to be in security, not such an exciting time to be a potential victim.

api(1209) 5 days ago [-]

It reminds me of when the first buffer overflows were disclosed and they were followed by a massive rash of buffer overflow vulnerabilities that continued for over a decade.

penagwin(4112) 6 days ago [-]

Not arguing, just asking, but how has Pandora's box been opened with the disclosure of Spectre and Meltdown? As far as I know, we've had security researchers discovering and reporting vulnerabilities since there were computers.

I do agree that this won't end soon though. It appears to me that many of the methods CPUs use for better performance are fundamentally flawed in their security, and it's not like we can expect the millions of affected machines to be upgraded to mitigate this.

xondono(10000) 5 days ago [-]

Security by obscurity is no security

dang(172) 6 days ago [-]

Url changed from https://zombieloadattack.com, which points to this.

There is a home page about today's vulnerability disclosures at https://news.ycombinator.com/item?id=19911715. We're disentangling these threads so discussion can focus on what's specific about the two major discoveries. At least I think there are two.

makomk(3600) 4 days ago [-]

I think there are two separate branded announcements of three or four different vulnerabilities, depending on how you count. (There are four CVEs and Intel lists four, but the researchers announced three.) Haven't seen much discussion of the specific differences between them, probably because they're subtle and not terribly relevant to most folks - they all involve one process speculatively reading memory it shouldn't be able to access via the memory access buffers within Intel CPUs, they just vary in which parts of the memory access machinery they use and how exactly they're exploited.

p1necone(10000) 5 days ago [-]

I'm sure I remember a post on here (or possibly /r/programming) a couple of years ago from an Intel employee mentioning that Intel was cutting a lot of QA staff, and that we should expect more bugs in the future. I could be imagining things though.

ahartmetz(10000) 5 days ago [-]

I remember a leak about a call to become 'more agile' like some ARM designers, implying less time spent on verification.

jniedrauer(10000) 5 days ago [-]

What impact does this have in a multi-tenant cloud environment? I'm legitimately considering moving my security critical EC2 instances over to AMD-backed instance types right now.

scandinavian(10000) 5 days ago [-]

I doubt that you both manage critical infrastructure on AWS and haven't read the AWS security bulletin.


INTPenis(10000) 5 days ago [-]

So I'd love to post an Ask HN: Which AMD Laptops would you recommend for work, alternatives to Thinkpads?

I've noticed some Thinkpads with AMD CPUs but I feel like I'm on virgin ground when it comes to AMD and their integrated GPU offerings.

strmpnk(3803) 5 days ago [-]

I've been eyeing more release details on the ThinkPad X395, which was recently announced. 'Coming Soon' probably means early June for some select configurations. I think these will fit into the premium/professional laptop space better than some of the bargain laptops that carried AMD chips in the past.

I believe other OEMs are developing similar offerings as well, but I can't find any quick links for newer SKUs like the Ryzen 7 3700U, which offers the improved Zen+ revision that should specifically help with battery life and heat issues.

akvadrako(3889) 5 days ago [-]

Honor Magicbook looks interesting:


The new 3700U model will probably be available on AliExpress next month or so. I would consider it, except that Linux support is unknown and it only has 8GB of RAM.

vondur(10000) 5 days ago [-]

I've just received some ThinkPad E485s, which have the Ryzen 2500U CPU. They seem pretty nice, with a non-glare 1080p screen. The documentation says they support Red Hat Enterprise Linux and Ubuntu. I was thinking of trying Pop!_OS on one to see how it runs.

Shelnutt2(10000) 5 days ago [-]

If you don't need a dedicated GPU, the APU offerings from AMD are great. They have native Linux drivers for everything (on the AMD side; double-check the NIC/touchscreen/touchpad). I'm using an HP Envy x360 15z with an AMD Ryzen 2700U running Gentoo and love it. The HP Envy has a weird keyboard, but it was a good tradeoff for the AMD setup when I bought it last year.

There is a much larger market in 2019 for AMD laptops, so you should be able to find something to suit your needs.

bitL(10000) 5 days ago [-]

You can't really find a good AMD workhorse laptop; all of them end up with outdated 1080p displays at best :( There is only one notebook with a decent HiDPI screen and a Ryzen (some HP, but not a workhorse). There was also a terrible problem with mobile Ryzen drivers, resolved only about a month ago.

shereadsthenews(10000) 6 days ago [-]

I really hate these descriptions of SMT as some kind of violation of the natural relationship between CPU frontend and backend. The idea that there is a "physical core" and a "logical core" does not map to reality.

xyzzyz(4093) 6 days ago [-]

The idea that there is a "physical core" and a "logical core" does not map to reality.

This is the terminology that Intel itself uses in its documentation to describe its products, though. To be fair, they say 'physical processor' and 'logical processor', not 'core'.

userbinator(908) 5 days ago [-]

An unprivileged attacker with the ability to execute code

That sounds like a contradiction --- if you can already execute code, I'd say you're quite privileged. It's unfortunate that their demo doesn't itself run in the browser using JS (I don't know if it's possible), because that's closer to what people might think of as 'unprivileged'.

The attacker has no control over the address from which data is leaked, therefore it is necessary to know when the victim application handles the interesting data.

This is a very important point that all the Spectre/Meltdown-originated side-channels have in common, so I think it deserves more attention: there's a huge difference between being able to read some random data (theoretically, a leak) and it being actionable (practically, to exploit it); of course as mentioned in the article there are certain data which has patterns, but things like encryption keys tend to be pretty much random --- and then there's the question of what exactly that key is protecting. Let's say you did manage to correctly read a whole TLS session key --- what are you going to do with it? How are you going to get access to the network traffic it's protecting? You have just as much chance that this same exploit will leak the bytes of that before it's encrypted, so the ability to do something 'attackful' is still rather limited.

Even the data which has patterns, like the mentioned credit card numbers, still needs some other associated data (cardholder name, PIN, etc.) in order to actually be usable.

The unpredictability of what you get, and the speed at which you can read (the demo shows 31 seconds to read 12 bytes), IMHO leads to a situation where getting all the pieces to line up just right for one specific victim is a huge effort, and because it's timing-based, any small change in the environment could easily 'shift the sand' and result in reading something entirely different from what you had planned with all the careful setup you did.

Using ZombieLoad as a covert channel, two VMs could communicate with each other even in scenarios where they are configured in a way that forbids direct interaction between them.

IMHO that example is stretching things a bit, because it's already possible to 'signal' between VMs by using indicators as crude as CPU or disk usage --- all one VM has to do to 'write' is 'pulse' the CPU or disk usage in whatever pattern it wants, modulating it with the data it wants to send, and the other one can 'read' just by timing how long operations take. Anyone who has ever experienced things like 'this machine is more responsive now, I guess the build I was doing in the background is finished' has seen this simple side-channel in action.
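The crude "pulse the CPU" channel described above can be sketched in a few lines. This is a hypothetical illustration, not the ZombieLoad technique: the bit period, workload size, and threshold are made-up tuning values, and in practice the sender and receiver would run in separate VMs on the same host.

```python
import time

BIT_PERIOD = 0.05  # seconds per transmitted bit (arbitrary tuning value)

def send_bits(bits):
    """Sender side: 'pulse' CPU usage -- busy-spin for a 1, sleep for a 0."""
    for bit in bits:
        end = time.monotonic() + BIT_PERIOD
        if bit:
            while time.monotonic() < end:
                pass              # burn CPU: contention slows the receiver's probe
        else:
            time.sleep(BIT_PERIOD)  # stay idle: the probe runs at full speed

def probe():
    """Receiver side: time a fixed chunk of work; contention stretches it."""
    t0 = time.monotonic()
    x = 0
    for i in range(50_000):
        x += i * i
    return time.monotonic() - t0

def decode(samples, threshold):
    """A slow probe means the sender was busy, i.e. a 1 bit."""
    return [1 if s > threshold else 0 for s in samples]
```

The receiver would call `probe()` once per bit period and feed the timings to `decode()` with a threshold calibrated against quiet-system baselines; any shared, contended resource (disk, cache, memory bus) works the same way.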

shittyadmin(10000) 5 days ago [-]

> That sounds like a contradiction --- if you can already execute code, I'd say you're quite privileged.

If you're in a VM, you have no privileges over the host CPU, you can't switch to another VM or to the host itself. That's what's meant by unprivileged here.

kbenson(3329) 5 days ago [-]

> if you can already execute code, I'd say you're quite privileged.

I always interpreted 'privileged' to mean 'superuser'. I.e. unrestricted. Or possibly the case of one user and another user. Having a program that can determine the URL you are visiting in the browser from memory when running as the same user is a different class than something that can do the same when run as any non-root user on the system. There's a reason it's common to 'drop privileges' in a daemon after any initial setup that requires those privileges (such as binding to a low port).

jamiek88(10000) 6 days ago [-]

A potential 9% performance hit in the data center. Add in all the Spectre and Meltdown mitigations and we have potentially lost nearly two generations of Intel performance increases.

It just shows the hoops and tricks needed to keep making, on paper, faster processors year on year without node shrinks to give headroom.

14nm++++ is played out.

faissaloo(4115) 6 days ago [-]

I wonder at what point the hardware fixes for these issues stop being worthwhile, and whether we'll see a resurgence of processors without speculative execution or any of these other speed-ups.

zelon88(3932) 5 days ago [-]

So far there seem to be far more of these vulnerabilities in Intel CPUs.

Is that a reflection of engineering differences or a statistical byproduct of the market share of Intel CPUs?

I run AMD not because of the security implications but because I feel every dollar that goes to Intel competition will push Intel and thus the entire industry forward.

repolfx(10000) 4 days ago [-]

Probably both - AMD chips have lower market share because they have lower performance, and they have lower performance (maybe) because they speculate less aggressively. Intel did these optimisations for a reason after all; the market rewards them.

dfrage(10000) 5 days ago [-]

Market share is a good answer. In x86 space alone, per https://www.extremetech.com/computing/291032-amd-gains-marke... (which I found without putting much effort into it), AMD's share in servers is negligible and even dropped in the last quarter. On the other hand, mobile and especially desktop are rising smartly, but still somewhat modest. IoT is excluded, and AMD could be doing well there to the extent anyone's using x86 for that, and there's also (quasi-)embedded like network gear.

So the cloud vendors are 97% minimum Intel, they're exquisitely vulnerable both technically and reputationally to these bugs, the stakes are existential for them and they have a lot of money they can throw at the problem, whereas the users of notebooks and desktops are a much more diffuse interest.

As I've mentioned many times in these discussions today, everyone had Spectre issues, and everyone but AMD has Meltdown ones. The more recent vulnerabilities are Intel only because they're using what was learned from those first two to attack Intel specific features like the SGX enclave.

IgorPartola(1380) 5 days ago [-]

So at what point do we start producing CPUs specifically aimed at running a kernel/userland? Why don't we have a CPU architecture where a master core is dedicated to running the kernel and a bunch of other cores run userland programs? I am genuinely curious. I understand that x86 is now the dominant platform in cloud computing. But it's not like virtualization needs to be infinitely nested, right? Why not have the host platform run a single CPU to manage virtual machines, which each get their own core or 20? Would the virtual machines care that they don't have access to all the hardware, just most of it?

mochomocha(4113) 5 days ago [-]

You'd also need to duplicate the whole memory hierarchy of CPU caches to prevent cache attacks against your 'kernel CPU'.

mr_toad(4012) 5 days ago [-]

> Why don't we have a CPU architecture where a master core is dedicated to running the kernel and a bunch of other cores run userland programs?

Sounds a lot like IBM's Cell architecture.

dragontamer(3624) 5 days ago [-]

> Why don't we have a CPU architecture where a master core is dedicated to running the kernel and a bunch of other cores run userland programs?

How will your 'userland core' switch to other userland programs safely? A pointer dereference can hit an mmap'd file, so it's actually I/O. This will cause the userland program to enter kernel mode to interact with the hardware (yes, on code as simple as blah = (this->next)... the -> is a pointer dereference, potentially into an mmap'd region backed by a file).

So right there, you need to switch to kernel mode to complete the file read (across a pointer dereference). So what, you have a semaphore and linearize it to the kernel?

So now you have only one core doing all system-level functions? That's grossly inefficient. Etc. etc. I don't think your design could work.
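The mmap point above can be made concrete with a small sketch (Python; the file name and sizes are arbitrary illustration values): the subscript on the mapped object reads like an ordinary memory access, yet the first touch of a non-resident page faults into the kernel, which performs file I/O before the "read" returns.

```python
import mmap
import os
import tempfile

# Build a file larger than one page, then map it into memory.
fd, path = tempfile.mkstemp()
os.write(fd, b'A' * 16384)
os.close(fd)

with open(path, 'rb') as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # This indexing looks like a plain memory read, but servicing the
    # page fault behind it is kernel-mediated file I/O -- a pointer
    # dereference that is really a read from disk.
    byte = mm[8192]
    mm.close()

os.unlink(path)
print(byte)  # the int value of the byte at offset 8192
```

This is why even "pure userland" code can't be confined to a core that never enters the kernel: any memory access may need the kernel to page data in.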

founderling(3950) 5 days ago [-]

You cannot give each VM their own core. The business model of the cloud is that multiple VMs with virtual cores run on a single real core.

lallysingh(4001) 5 days ago [-]

The hit is in IPC between the kernel and userland processes. If you really want to pay it, then just go microkernel. You can do that today.

justryry(10000) 5 days ago [-]

Do cloud providers commonly float cores between VMs? I could see instances like the AWS T family (burstable) sharing, but I had always assumed that most instance types don't over-provision CPU.

If that's the case, my CPUs are likely pinned to my VM. I could still have evil userland apps spying on my own VM, but I would not expect this to allow other VMs to spy on mine.

jupp0r(3577) 5 days ago [-]

Sharing CPUs is not the point, as long as you are sharing physical memory with other tenants, you are vulnerable (although exploits are much harder when attackers have to cross privilege boundaries).

bayindirh(10000) 5 days ago [-]

I don't think many cloud providers explicitly pin the VMs to the cores even if they don't over provision the servers.

guido_vongraum(10000) 5 days ago [-]

People should realize that the ancient Chinese were onto something when they said that all phenomena evolve only so much before they tip over the peak of maximum development and inevitably rumble downhill into overdevelopment.

P.S. The Holy Church of Progress keeps flagging the heresy of the I Ching out of existence, may it prevail in its glorious ways. Curious fact: expressing your disagreement in written form takes more neurons than the flagging reflex does. Try and ye shall succeed!

dang(172) 5 days ago [-]

I like the I Ching too but could you please stop posting these and then deleting them? It's an abuse of the site.

dschuetz(3209) 6 days ago [-]

It takes one rogue/unpatched VM to run and scan threads randomly, undetected, over a longer period of time. With HT disabled, potential hits become less likely, but still possible given time. Is virtualization on Intel dead now? Perhaps not. But it's increasingly dangerous to use Intel for cloud services.

nnx(1280) 5 days ago [-]

Interestingly AWS released a bulletin about MDS vulnerability but nothing about ZombieLoad yet. https://aws.amazon.com/security/security-bulletins/AWS-2019-...

mda(4123) 6 days ago [-]

Looks like AMD CPUs are safe again.

harryh(2252) 6 days ago [-]

Note that Spectre definitely affected AMD chips, and in general these sorts of side-channel attacks based on speculative execution are extremely likely to be effective against any chip that employs speculative execution (including AMD-manufactured ones), though the precise implementation might have to be jiggered a bit.

yalok(10000) 5 days ago [-]

> macOS performance: Testing conducted by Apple in May 2019 showed as much as a 40% reduction in performance with tests that include multithreaded workloads and public benchmarks. Performance tests are conducted using specific Mac computers. Actual results will vary based on model, configuration, usage, and other factors.

from here: https://support.apple.com/en-us/HT210107

ynnn(10000) 5 days ago [-]

Yeah, if you choose to turn off hyperthreading. Pretty expected tbh - hyperthreading helps quite a bit for some things.

Historical Discussions: I turned my interview task for Google into a startup (May 16, 2019: 744 points)

(745) I turned my interview task for Google into a startup

745 points 4 days ago by mmoez in 3228th position

uxdesign.cc | Estimated reading time – 8 minutes | comments | anchor

Over a year ago I was lucky enough to bag an interview at Google for the role of Visual Designer. After getting through several tough rounds I was faced with the notorious Google interview task. Unfortunately for me, I didn't get the job. Fortunately for me, all was not lost as I fell in love with the task so much I co-founded a fitness tech startup on the back of that idea.

Here's the story of how we built Tona, launched today on Product Hunt.

Interview task on the left www.tona.fit on the right

The dreaded interview task

The bane of all product designers' lives is the dreaded interview task. We all know the drill: you have '4–6 hours' to design a slick product, with a memorable brand and a cohesive working method. No one acknowledges the fact that, in reality, you're about to dedicate up to 5 working days to this task, with the potential for them to ghost you straight afterwards.

When I interviewed at Google for the role of Visual Designer, this was no exception. Two rounds of interviews later, I was slapped with the infamous task. They presented three design challenge options to pick from, with a week's notice, advised that we should spend no more than 3–5 hours on the task (wink, wink), and asked for the following deliverables:

  • A low-fidelity overview of your proposed UX (wireframes or sketches are fine)
  • A high-fidelity mockup for one widget or interaction
  • Any materials you used to arrive at final results (paper sketches, explorations, etc.)

To be fair to Google, they did stipulate they valued the work process and idea generation above pixel perfection. But, as a designer, I can't help but fuss over the details. I know that great visuals capture attention and that every extra minute of work can help you get the edge over a rival candidate. And so, the late night work began...

God is in the detail (interview task mockups)

The task

One of the options was to design a Fitness Class Leaderboard; as a regular gym-goer, I decided to go for it. I figured it was a chance to flex my design skills in UI and UX, with a little fun thrown in. Fit-tech had already piqued my interest, as my friends (and future co-founders) and I had spent many evenings nursing drinks and musing over ideas for a gym-class booking app.

In order to cut down the production time, I reused assets I'd already doodled during the class-booking app ideation. Then I duplicated the layout and web page presentation format that I'd used for previous interviews (check out my previous work for Shazam and for Hudl).

The holiday balcony workspace. Intentionally ruffled by design...

I always strive to be as efficient as possible by reusing components and assets, but the dreaded interview task still got to me. I racked up 30 hours of work trying to make Google love me, all in a week in which I was also on holiday in the South of France. I was working until 3am on a balmy balcony in the town of Nice (not the worst thing in the world, but being on the beach or in the buzzing bars would have been a hell of a lot better).

As I really wanted the job, I put my all in. You can check out my final interview submission here.

Interview task on the left and Tona as it is now. Although the brand has visibly improved, the core facets of the product are still very similar

From the interview task to a business idea

Unfortunately for me, I didn't get the job. I was told it was extremely close and that I should reapply in 12 months, but it seems my ideas weren't out there enough to excite them... (so try and shock them, aspiring Googlers!).

This pragmatic approach was intentional, however: I was already thinking ahead to how this product could sell to my local gym, or how I would use it next time I hit a class.

Whilst researching, I found there was no true platform for gym leaderboards: one that records and displays metrics no matter your gym. What I had in mind would record everything, from reps to distance, load to calories, seconds to rounds, max heart rates to one-rep maxes. It would work in your gym or at home, and let you compare your results with friends, gym buddies and the world.

My mind instantly went to an app I love, Strava. I use it almost daily to record my runs to share and compete with friends. But, I also enjoy strength and HIIT training in the gym. With the gym leaderboard brief, I realised there was an untapped niche to create a social fitness platform for all gym based activity. Recording and sharing reps and sets, not just your distance and pace.


Fast forward to this week in May and we have finally released our product, Tona. Check it out on Product Hunt today and upvote it if you like it! It has been one heck of a journey to get to this point, with plenty of ups, downs and mistakes along the way, of which I'm sure I could write plenty more on...

But looking back, one of the most interesting things is how similar my initial design for the Google Interview task was to where we landed today.

Explore sections of interview task (left) and live App Store version (right)
Filters in the interview task (left) vs live App Store version (right)

Where we are today

We have over 5,000 users planning, recording and sharing workouts from boxing to yoga, CrossFit to HIIT from Mexico to Japan, London to Los Angeles. In fact, our BETA users have burned two million calories collectively and lifted 500,000 tonnes. All of which are displayed in over a million unique leaderboards filterable by sex, age, weight and location.

Our gym solution is being trialled by 5 studios in our home town of London and we have international expansion in mind this year (if you are a gym owner, reach out to us at [email protected]!).

Try my 'interview task' for yourself

Fortunately, this interview task had a life after completion. Unfortunately, for the most part, those weeks of work never do.

I hope this inspires you to treat future interview tasks, no matter your industry, as an opportunity to explore ideas beyond the first paycheck. You never know, there might be a startup in it after all...

The Tona app is free to use and available to download for iPhone, Apple Watch and pre-release on Android today. We also launched on Product Hunt today so would love your feedback there if you have a spare minute.

All Comments: [-] | anchor

wpietri(3437) 4 days ago [-]

Is this common?

> I was told it was extremely close and that I should reapply in 12 months,

It seems really weird to me to put this burden on the applicant. If someone is above the hiring bar but just got edged out, shouldn't they go into the bucket of people to contact as soon as there's an opening? And for designers at a place the size of Google, wouldn't that be much sooner than 12 months?

allenu(10000) 4 days ago [-]

I interviewed at Google and was told something similar. However, in their case they did reach out to me about 12 months later asking if I wanted to reapply. This was for a dev role though.

nerdjon(10000) 4 days ago [-]

Some companies do technical interview great, but there are a couple things that I have noticed that always make me question if I actually want to work somewhere:

- Being sent to a website to do a take home coding exercise, with a checkbox basically saying 'I won't use the internet, stack overflow, or similar'. I get that you need to test my knowledge, but shouldn't you care more that I know how to search for my problem and understand a fix instead of flailing around?

- (This one I have noticed in person.) Being handed a thing to build and told I have a limited amount of time; that all they want is a proof of concept, but the tooling they are asking me to use goes completely against the idea that it is a 'proof of concept', since it makes certain assumptions about the end goal. E.g.: deploy something with Terraform, but we don't want you to build an AMI or anything like that, so just manually go into each system once you create it.

nmcfarl(2921) 4 days ago [-]

Once I had a half-day exercise that went even further: they provided the computer, removing internet access, but also removed all of the normal module documentation, and for good measure forbade you from using any documentation from any source.

There were a lot of compiler errors on the way to a solution for that one.

jacurtis(10000) 4 days ago [-]

I hate companies that act like it is a sin to use Google or Stack Overflow when coding.

When I hear this BS, then I know it is clear that they have never coded themselves. I would love to meet a software engineer who truly can sit in a padded room and write code with no internet access to use for help or reference.

Especially those of us that juggle multiple languages (full stack). Sometimes I know I need to run a method but I can't remember the actual method name or how it is formatted in this language. I know it exists, I have used it many times before, but I might not remember the exact way it is worded. So I have to look it up. It adds <60 seconds for me to find.

Or I might know a method exists but not be sure which parameters it accepts. I might usually use the first 2 params and leave the rest, but this specific use case requires changing something I don't normally need. So guess what, I have to look it up!

Sometimes I might get an error message that I don't recognize, and a quick look at Stack Overflow reveals that a configuration item is off or something was formatted incorrectly. I can often understand my problem by reading just the first few lines of the selected answer on SO. I might be able to solve a bug in 60 seconds using Stack Overflow that might otherwise take hours without it.

Especially as software developers keep getting asked to know more and more stuff, we have to rely on documentation and helpful sites in order to do our jobs. It is crazy to expect that you don't use those resources.

Oh and ObjectiveC/Swift developers have it really bad. Some of the method names are entire structured sentences.

sethammons(3979) 4 days ago [-]

The rule of thumb I subscribe to: have the interviewer(s) do the assignment themselves. Give the candidate 3x the time the in-house people took. Or, framed in reverse: if you want the assignment to take n minutes or hours, give yourself n/3 to complete it, and however far you get is the bar for the assignment. Interviewees are stressed, maybe outside their domain, and dealing with other things. Make something through which they can show you their value. The harder and correct thing to do is to find how they can help your org. It is easy to find what they don't know or to play stump-the-idiot.

tatami(10000) 4 days ago [-]

My high school physics teacher did the same. He would take the tests he prepared himself to check whether the scope was appropriate for the allocated time. The factor of 3 was spot on for this kind of estimation.

dnaismith(10000) 4 days ago [-]

We did this at my company. A developer at the level the applicant was applying for would do a task that took them 2–4 hours. We then gave the interviewee four days to complete the task, with a recommended time of 2 days. We also always made sure to include the weekend in the time they had, as we know time is precious during the week.

cecilpl2(10000) 4 days ago [-]

I recently was given a toy problem in an interview - read data in some format, process it, and write the output. Expectation was 4 hours of work, which is pretty close to what it took me to get all the edge cases, test suite, requisite documentation, and VS project/solution files.

The feedback I got afterwards was 'We're passing on you as your code doesn't compile/run on Linux', a requirement that wasn't specified.

0xffff2(10000) 4 days ago [-]

>The feedback I got afterwards was 'We're passing on you as your code doesn't compile/run on Linux', a requirement that wasn't specified.

I have to ask, was there something that implied that it should specifically run on Windows? If I got a software spec that didn't spell out exactly which platforms needed to be supported, one of the very first things I would do would be to ask for clarification.

gregd(3627) 4 days ago [-]

At least you got a response :)

But on a serious note, I really don't understand why they couldn't have written you back and asked if you could make it compile on Linux and send it back.

thoreauway(10000) 4 days ago [-]

I interviewed at Uber before they launched. Their 'take home' was to create... Uber.

Here's the exact challenge:

UberCab Coder Challenge

1) Write a program that determines the wait time, trip time, rating, and fare for a black car trip in San Francisco, given that the customer's pickup location and destination is randomly placed anywhere in San Francisco proper, and given that there are X number of cars all placed randomly across the city. Use GoogleMaps API for trip duration time. Run simulation 100 times for 1, 5, 10, 20, and 50 cars. Output average wait times and trip times for each # of cars the simulation was run on.

2) Take #1 and do the same pickup and trip time calculation given that there are Y customer requests per hour (extra credit if this is done with a Poisson distribution ;)). Take into account each car's availability (given that they may have been selected to carry a customer and are in transit on pickup or on actual trip, and thus unavailable). Run simulation given the number of cars in #1, but for each simulation in #1, run once for Y=1x, 2x, 5x, the number of requests per hour (X being the number of cars in the city, as defined in #1)

3) Take #2 and then make a GoogleMaps mashup that then shows the various trips that were taken for each simulation. Provide mashup interface showing all trips but also provide ability to only show trips for each driver.

4) Write a 'Hello World' app on the iPhone that for a certain driver logging in shows all of his trips taken in the last simulation, the average rating that he got, and the total fares he collected during those trips.
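For a sense of scale, part 1 of the challenge above fits in a few dozen lines. This is a hedged sketch, not the expected solution: it substitutes straight-line distance at an assumed average speed for the Google Maps API, and the city size and speed constants are guesses.

```python
import random
from statistics import mean

CITY_KM = 11.0     # treat SF proper as an ~11 km square (rough assumption)
SPEED_KMH = 30.0   # assumed average black-car speed in city traffic

def rand_point():
    """Random location uniformly placed in the city square."""
    return (random.uniform(0, CITY_KM), random.uniform(0, CITY_KM))

def travel_minutes(a, b):
    """Stand-in for the Google Maps API: straight-line distance / speed."""
    dist = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return dist / SPEED_KMH * 60

def simulate(num_cars, runs=100):
    """Average wait (nearest car to pickup) and trip time over `runs` trials."""
    waits, trips = [], []
    for _ in range(runs):
        pickup, dest = rand_point(), rand_point()
        cars = [rand_point() for _ in range(num_cars)]
        waits.append(min(travel_minutes(c, pickup) for c in cars))
        trips.append(travel_minutes(pickup, dest))
    return mean(waits), mean(trips)

for n in (1, 5, 10, 20, 50):
    w, t = simulate(n)
    print(f'{n:2d} cars: avg wait {w:5.1f} min, avg trip {t:5.1f} min')
```

Parts 2–4 (Poisson arrivals, car availability, the maps mashup, and the iPhone app) are what turn this from an afternoon script into the multi-day project commenters below object to.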

astura(10000) 4 days ago [-]

>Write a 'Hello World' app on the iPhone that for a certain driver logging in shows all of his trips taken in the last simulation, the average rating that he got, and the total fares he collected during those trips.

That's not 'Hello World' app, a 'Hello World' app prints 'Hello World.' The purpose of a 'Hello World' app is to demonstrate you've setup the environment correctly, nothing more.

Words and terms have meaning.

aswanson(3693) 4 days ago [-]

Wow. That's just abusive...

wingerlang(4065) 4 days ago [-]

I got a take-home assignment from a large company a few months ago, and it was the whole shebang as well: build an iOS application with a full test suite, several screens, working against their test API, polished, with offline/online support, etc.

I estimated it would take at least 2 weeks. I sent it to a colleague and he estimated the same. In reality you had 48 hours to turn it back in. I gave up halfway into the project.

shanxS(4009) 4 days ago [-]

How much time did you have and which level of engineer (senior/architect etc.) were you applying to?

monocasa(2761) 4 days ago [-]

Well, that certainly selects for engineers willing to do unreasonable amounts of work without questioning it.

nan0(4122) 4 days ago [-]

How long did Uber give you to do this?

throwaway373674(10000) 4 days ago [-]

I've been a software engineer at Uber since 2015 and have led over a hundred onsite interviews and countless phone screens. In four years I have never seen a take-home exercise be used. Every other interviewer also holds a strong negative opinion of them.

It's possible we were doing this many years ago, but that certainly isn't the case now.

PerfectElement(4126) 4 days ago [-]

That's a great way of getting an MVP done for free.

busterarm(10000) 4 days ago [-]

I remember my Uber interview.

My interviewer was sitting in a room full of people when they called me, asked me questions and then proceeded to have conversations with the other people in the room without listening to me and would ask me to repeat my answers. And do it again.

15 minutes of this and I hung up on them. This was about 4 years ago and the equity that I have in the company I joined instead is on an upward trajectory instead of what Uber just did. Phew.

zoomablemind(10000) 4 days ago [-]

Was it an interview for a lead position or just a coder? How did it end?

It looks like a good chance to outline the resource requirements for accomplishing this. No way should this be the work of a team of one.

State the expected stages and some ideas for possible architecture and frameworks.

Then ask for resources needed to implement this with some rough-optimistic estimates, including the budget.

That should be enough for a serious conversation to begin with interested party.

gvand(3955) 4 days ago [-]

I wouldn't have expected anything less from them, go Uber! Always the worst!

gregd(3627) 4 days ago [-]

Why are people putting up with this?

distances(10000) 4 days ago [-]

Having never worked in SV, are these home assignments common there? I know several engineers who would refuse to do any homework at all as a matter of principle, so a task of this magnitude without compensation sounds laughable on the face of it.

andrei_says_(4127) 4 days ago [-]

A company which has no problem pushing beyond any conceivably reasonable bounds of exploitation.

A vision to push anyone who works with/for them to the maximum possible win/lose ratio. Including drivers and riders.

watfly(10000) 4 days ago [-]

Did you complete this? To me this sounds like they were not looking to hire but would hire someone exceptional who goes through this task and does better than their own team did.

ravenstine(10000) 4 days ago [-]

None of the places I've worked at had me do a coding challenge. Either they already knew who I was through networking or they asked me to show them a personal project and explain how it worked. Each job was a great opportunity.

When I start looking for a new opportunity in a year or two, I may consider coding challenges to be a soft red flag in that I don't think they're worth wasting time over. I spent so many hours in the last 6 months doing various code challenges, some of which were borderline unreasonable, none of which landed me a position. I ended up getting hired by one of the top 3 places I wanted to work, and it was mainly because I had already networked with my interviewers. Maybe they looked at my GitHub, but I never asked. Regardless, coding challenges of any kind have consistently been a waste of time for me.

Unless I really really want to work for a particular company, from now on I will immediately end my interview process if they ask me to do the following:

- On-site coding challenges

- Take home coding challenges

- Whiteboarding

- Brain teasers

I don't have time to waste on that shit anymore. If having years of experience and a GitHub don't demonstrate to you that I can write code, then I really can't help you with whatever it is you're looking to achieve. Developers have to shotgun apply to many places in order to play the numbers game of getting hired, and life is too short to work an extra hour or more every night to code something no one will ever use that won't get me hired anyway. People are generally biased towards feelings rather than facts, and any of the above bullet points mostly serve as a Rorschach test for interviewers who have already decided whether they like me or not. Someone just breaking into the industry might want to go through the hazing process of hammering out coding challenges, but I refuse to have my personal time wasted.

To anyone reading this, networking is a biggie. My dad used to tell me that and I never took him seriously because I hated the idea that schmoozing outperforms merit, but it's absolutely true. In reality, networking should be better because, at the end of the day, most people can tell whether you're competent and if they want to work with you based on how you naturally interact with them. Formal interviews barely work because everyone rehearses for them and they only prove that you were able to tackle one problem (which recruiters often alert candidates to anyway).

bubbleRefuge(10000) 4 days ago [-]

Amen brother. Networking is absolutely the way to go. As employees or grunts and not a part of the ownership/capitalist class, our network is our biggest asset beyond our skills. Some suggestions:

1) Try to socialize with fellow engineers at work. Coffee, lunch, grab a beer. Host or attend a dinner party occasionally. Put yourself out there.

2) Don't work at any single employer for more than 3 years unless you have very compelling reasons, like stock options or excellent opportunities staying put. Changing employers allows you to grow your network, your skills, and your experience.

3) Attend meetups and networking events.

monksy(3802) 4 days ago [-]

Whiteboard interviews can work well if they're not going to try to pick you apart over extremely complex topics and overly high expectations.

Interviews are good about getting a general feeling about a person.

sonnyblarney(3342) 4 days ago [-]

This is ridiculous.

There's nothing wrong with on-site coding challenges, white boarding, and a good chunk of brain teasers.

Having people code a small problem might be by far the best way to understand how they can do the job.

'If having years of experience and a GitHub don't demonstrate to you that I can write code, '

Your Github code does not demonstrate what people are looking for.

JudgeWapner(10000) 4 days ago [-]

> I hated the idea that schmoozing outperforms merit, but it's absolutely true

Try thinking about it quantitatively. Someone who spends the time to schmooze (read: invest time in developing a human relationship) has essentially posted that time as collateral. They added chips to the figurative pot, and if they were a fraud or recklessly destroyed that relationship, they forfeit that collateral (lose the relationship, lose the honor and respect they earned). So that's why it's convincing to invest in people who are willing to post collateral. Now certainly, there are people who have no shame and will pull cons. So this method isn't infallible. But they can only get so far before they get exposed, especially in our industry.

brianpgordon(3982) 4 days ago [-]

Refusing take-homes is a reasonable stance but refusing to whiteboard strikes me as overly critical. It's a pretty standard thing to do in an interview... while whiteboarding may be a little silly, walking out of an interview after someone's already blocked time for you because you think whiteboarding is silly is even more silly than whiteboarding is. If I were the interviewer I'd be concerned that you can't be even a little bit flexible with things that you personally don't like.

Zenbit_UX(10000) 4 days ago [-]

I recently went through an interview with a company posting on HN and was shocked by their challenge.

They provided their current UI (barebones dashboard) as well as instructions to launch the backend server locally and requested I upgrade their UI to include a side panel for filtering a data table.

The task itself wasn't difficult, but then came the realization that these scumbags wanted me to upgrade their current product under the guise of a challenge. I instantly knew this wouldn't be a company I'd want to work for even if they did provide an offer.

hdfbdtbcdg(10000) 4 days ago [-]

Are you sure it was the latest version of their product? Asking someone to do a task that you have recently done could be a really good code test.

llamataboot(1267) 3 days ago [-]

I love take-home assignments. Especially reasonably scoped ones. They allow me to show off my skills. However, my opinion is that it should be paid labor. Give me a day of work and pay me to do it. This values my time, it actually doesn't add all that much more to the hiring expense (all that engineer interview time, resume review time, etc. is expensive!), and it makes sure you are screening out early and often and not wasting 20 people's time doing a bunch of take-home work for one dev position.

jhayward(4118) 3 days ago [-]

> my opinion is that it should be paid labor.

Every job I've ever had required me to get advance permission to do paid outside work, so such a rule would eliminate most people who are both ethical and employed.

dmoy(10000) 3 days ago [-]

I don't like takehome assignments because my current employer's moonlighting policies mean I can't do them.

Ideally that wouldn't be a legal policy for my company to enforce, but that's the reality for me right now.

isless(10000) 3 days ago [-]

In an ideal world, the task can be something that the company needed done anyway, further motivating the financial incentive. Your point about the general cost of the hiring process is well taken, but I imagine that companies issuing take-homes are giving all applicants the same task, and don't see a point in paying for the same job to be done repeatedly.

p0nce(3930) 4 days ago [-]

Easier to create a software company than work for one.

TallGuyShort(3081) 4 days ago [-]

My last company did interviews like this - I was always disgusted at the way most of the team would sit around and just find things to pick on in the person's code. Dumb stuff too. It became an excuse for a lot of the engineers to just make fun of someone who had successfully completed the program, but didn't take the time to JavaDoc everything and follow the same code style best practices we did. Things you can easily address with a bit of training and code review, and that maybe they already do when they're not trying to interview for a lot of jobs while still in school. Sad, really.

jldugger(4092) 4 days ago [-]

The way to handle this is to _write down your grading criteria ahead of time_. If it's not important enough to be front of mind when designing the challenge, it's not important enough to bring up when evaluating the output of a candidate you've really liked so far.

rickyc091(4127) 4 days ago [-]

Not saying this approach was correct, but given a choice, what type of interview would you prefer? Hackerrank? In person, programming challenges, smaller take-home assignment, other?

hising(2482) 4 days ago [-]

Awesome to read about you turning this into your own startup.

One thing I came to think about regarding the '4-5 hours' and throwing 5 working days at it: maybe as an interviewer it's quite obvious that you put in more than 4-5 hours, and what they were 'really' after were the trade-offs you would have made if you had only spent 4-5 hours on it. Just a thought.

allenu(10000) 4 days ago [-]

Yeah, it may translate into, 'well, if I work with this person, are they going to spend way too much time on details that I don't need and burn themselves out just trying to please me?' Part of working is being able to be assertive and say 'ok, for only 4-5 hours, here's what I was able to do for you, so now let's figure out next steps'.

dannyw(3838) 4 days ago [-]

Not Google, but similar company and can confirm. If you spend 5x the amount of time we suggest, it's obvious and the expectations scale a lot, lot higher than if you just spent 5 hours.

avip(4090) 4 days ago [-]

This makes no sense for 'product design'. Just thinking about it, looking at existing similar products, before you even grab your napkin, takes 4 hours.

albertgoeswoof(4059) 4 days ago [-]

This is a really great marketing example! The creator has combined a front-page HN post with a PH launch at the same time, with really polished-looking apps & landing pages.

Essentially this will give them a huge traction boost on the app stores and a bunch of new users. Super impressive.

allenu(10000) 4 days ago [-]

It's definitely good marketing, so kudos to the creator. I will say though that so many Medium articles I read now are essentially marketing pieces. They'll cover some topic very, very lightly and have a call to action at the end to view the author's product or company.

mmoez(3228) 4 days ago [-]

Link poster here: I am not the creator of the app and don't know its author Andrew Burton. So there is no intentional coordination. Heck, Andrew may not yet have noticed that his article is on the HN front page.

I have just stumbled upon this article in a UX mailing list and thought it'd be interesting to share it on HN.

Loughla(10000) 4 days ago [-]

Honestly, this type of advertising usually rubs me exactly the wrong way.

But after reading the discussion it prompted, I'm left questioning exactly why it does so.

And like you said, slick marketing on the part of the creator at the very least needs to be acknowledged.

maaaats(2892) 4 days ago [-]

I like the concept. I use Strava a lot for cycling, but it's not very good for non-cardio activities. Lots of people I follow also track their strength workouts on Strava, but I haven't done it myself as I find it hard to track progress when the system is not made for it.

I have used Fitocracy before; I'd say it may be the biggest competitor to this, as it is (at least 5 years ago when I used it) more tailored for strength, where one can log specific exercises and track them over time.

siscia(3238) 4 days ago [-]

I believe Google have brainwashed us all.

Are we sure that it is worth all this effort? Are they so much better than any other company or startup?

Is our own self so much more valuable if a random interviewer liked our work an epsilon more than the other candidate's?

People inside FAANG are unhappy as well, they leave their company as well.

I believe we should be less impacted by the ads that Google is running to sponsor itself as the best workplace ever.

dannyw(3838) 4 days ago [-]

Their pay and RSUs are pretty good...

Apocryphon(2710) 4 days ago [-]

I thought for the past five years FAANG, maybe Netflix aside, was supposed to be passé. Unicorns like AirBnB or Stripe were supposed to be the killer workplaces to strive for. Though I suppose with the potential IPO busts this year (see Uber) the dream aspirant tech workplace might be changing, too.

allenu(10000) 4 days ago [-]

I agree with you. I was immediately struck by the tone of the article that the author was 'fortunate' enough to be interviewed by Google. Maybe it's because I've been working for nearly 20 years now and am becoming an 'old man', but the feeling that Google is this promised land of opportunity rubs me the wrong way. Every company has pros and cons and I suppose Google may have a lot of pros (free food, good perks, and good pay) but at the end of the day, it's just a big company where you work on large projects and will likely have little say in the final product's look and feel.

VBprogrammer(3249) 4 days ago [-]

Urgh. I recently had a similar situation. I started interviewing with a company who have a take-home assignment which I could have completed in the required time without taking any care over the quality of code I was submitting, but to do it properly I spent a bit of time thinking about it and building a reasonably extensive test-suite along the way.

I was asked to come in for a face to face interview, for which I made clear I was having to take a day off of work, which was cancelled a few working hours before (so I wasn't able to cancel the holiday) without an apology. I was then interviewed again by phone and told that there was 'a bug' in my code and that I should try to find it and resubmit it without any other details. The spec was a puzzle with lots of weird edge cases and horrible inconsistencies.

I decided at that point that I didn't want to work with them.

kemiller2002(10000) 4 days ago [-]

I had one a little while ago where I spent a fair amount of time on the request, and I thought it was pretty well done. It completed all their tasks, showed how to use their API, and it showed I know how to integrate several different technologies, from front-end things like React to building an API to interact with theirs and setting up and configuring cloud technologies. They came back and said, 'We don't want to continue, we expected more from him.' I sat there baffled and so did the recruiter, as it clearly did everything they asked for in the assignment and more.

I am so glad they didn't want to go further as I am sure they are a nightmare to work for.

beyondcompute(4076) 3 days ago [-]

Why is Google's design so bad, then, if they have such a tough interview process? Does corporate structure/culture kill even the best talent's initiative? I think what they need is non-conformism. But presumably they don't select for non-conformism during interviews; they select for obedience. (Just thinking out loud.) When will Google, having so many resources, release a product whose UI makes people say "wow"? Not just the (superior) data-processing capabilities or pretty good hardware.

killerdhmo(3959) 3 days ago [-]

Clearly this is subjective, but I'm regularly impressed with Photos and Maps. Thoughtful, subtle, and yet informative. With some delightful interactions.

omarchowdhury(2428) 3 days ago [-]

Google products are optimized for action, not for impressing Dribbble passersby. A 'wow' UI doesn't have to be mutually exclusive with the goal of completing actions, but who's to say others aren't saying 'wow', even if you aren't?

JacKTrocinskI(4076) 4 days ago [-]

Someone I once worked with asked me a question that I think is relevant here and I will never forget it (roughly translated to English): Do you work to live, or live to work? It's beautiful, simple, and for me it put into perspective what really matters. I value and enjoy my job but I refuse to dedicate my life to it.

jacurtis(10000) 4 days ago [-]

That is a great interview question. I would happily answer

'I work to live, and any company that doesn't want to hire me because of that answer is a company that I don't want to work for'

marapuru(10000) 4 days ago [-]

Interesting read. It's good to realize that an interview process and the work involved can turn into your own business.

Shows again that good effort pays off anyhow.

jmkd(10000) 4 days ago [-]

Equally revealing on how submitted interview tasks can have exploitable business value.

jordansmithnz(3085) 4 days ago [-]

This reminds me of a take home software engineering interview I was once given via email. Same deal, I was told about 5 hours. I'm an iOS developer, so I was expecting a pretty simple app.

I opened the PDF to find not one, but three separate tasks. Completion of all three was expected, with an estimate of about two hours each. One of the tasks was to replicate Apple's 'Reminders' app in its entirety, backend sync functionality included. Another was a task requiring Visual Studio (iOS devs have no need for any experience with this).

I promptly replied declining to continue the interview process. If you're ever in a similar situation, interviews can sometimes tell you more about the company than they can learn about you. Good chance I dodged a bullet, and could have been working for someone setting highly unrealistic client deadlines, with the expectation that I can build something in any technology proficiently.

amondal(10000) 4 days ago [-]

This happened to me too. A company reached out to me on Angel with their coding 'challenge': create a Facebook timeline clone with an API, Enzyme + Selenium tests, documentation, and deploy everything to AWS. Fortunately I have enough experience to say no to this kind of thing, but I do feel that some more junior folks are being exploited into thinking this sort of project is normal for an interview.

accnumnplus1(10000) 4 days ago [-]

I once received one of these tasks, described as requiring four hours. I had a look around the web, found the source of the exercise, complete with an expected time to complete of four hours, and the guy had completed all of two classes before abandoning it.

briandear(1954) 4 days ago [-]

Recreate Reminders? Wow. That app has an entire team of engineers to build that. It isn't trivial, despite being "simple." And to do it in two hours? That's just insulting.

beersigns(10000) 4 days ago [-]

I had a similar experience once but a little more extreme. On a take home interview test they were asking to build a full backend analytics setup plus a data viz UI. I called the recruiter back and told them an estimate to a problem like this was well beyond the ~4 hours they originally told me. Their response was to say 'they were looking for people who would find a way to get it done no matter what'. I immediately stopped the interview process without looking back.

duxup(3882) 4 days ago [-]

A while back I made a career change and I was looking to break into the land of programming, where there are a lot of bootcamp noobs like me trying to do the same.

I did an interview and one guy just didn't want to be doing interviews, it seemed. Later, when I was asking questions, it became clear that he was a 'senior dev' who just didn't want to talk / work with anyone he deemed less knowledgeable or not capable or something. I also found out he's the lead for the spot they're hiring for... bad feelings started for me.

Later I got some positive feedback that the take home assignment I completed was one of the only fully completed and 'thoughtfully done' assignments they received (one of the only times I received useful feedback during a recruiting experience).

Bad vibes aside, I was a noob and beggars can't be choosers so I was surprised when they asked for a second interview and felt I had to go to the interview (need a job!). Second interview and it was the same thing, and when I asked questions he didn't even answer them really / his random technical statements seemed like sort of ultra truisms / not related to the actual problem we were working. It also seemed this dude's team kinda worked on their own island (kinda appealing) and he was the guy running the island evaluating people (very much not appealing). More bad vibes....

It was a big corporate place (good pay, benefits I had heard) so that meant, MORE interviews if we were going to move forward..

But by that time I had had a good interview at a small place (fewer benefits, probably less pay, fewer people to learn from / with, long commute, but it seemed friendlier and the lay of the land was way clearer)... I decided that I just had too bad a vibe about the guy who would be my boss, so I declined the interview.

I was pretty honest with the HR person that just from the interview this guy really seemed like he didn't want to hire me / didn't really want to work with anyone like me and if they were going to hire someone they might want to work on that. The HR person said 'yeah we know'.

I still wonder what that job would have been like, would have been really nice to work at that place...but that guy... you just get that sense in an interview sometimes.

minimaxir(99) 4 days ago [-]

In a data science interview at a respected company I received a take-home with the framing "A Product Manager wants a dashboard with a massive amount of features (including interactivity, model prediction and GIS)" and received only 48 hours to do it. I thought there was some weird trick because that assignment had an unreasonable amount of scope in that timeframe. I found out the company uses a BI tool to streamline such tasks that is only available to enterprises and not consumers.

I eventually made the dashboard, but it took 16 man-hours; at the on-site, the interviewers implied they didn't like it as it was not feature-complete. (I called them out about the BI tool; they weren't happy about it but admitted I was correct that it would be more efficient.)

Now that I have had more experience as a data scientist, the real-world response to such a framing is to push back against the PM and write an implementation spec with a defined scope.

legohead(4122) 4 days ago [-]

> They presented three design challenge options to pick from, with a week's notice, and advised that we should spend no more than 3–5 hours on the task (wink, wink)

I guess as a programmer I'm too logical. You tell me X hours, that's all I'm going to spend. For one of my past interviews I was given a task to make a multiplayer battleship game using whatever I wanted, and was told to spend 2 hours total and they didn't expect me to finish.

I got some rudimentary client/server communication going and that was it. No game logic at all.

Didn't get the job: 'However, we would have liked to have seen you get further on the project in the same amount of time.'

djsumdog(1146) 4 days ago [-]

> you're about to dedicate up to 5 working days on this task, with the potential for them to ghost you straight afterwards

I often give companies a list of current open source tasks I'm working on: pick one and I'll complete it for your interview. Every single company turned me down, except one, which just accepted one of the tools I had on GitHub and examined that.

fendy3002(10000) 4 days ago [-]

Ha, next time companies should do it, then continue with those who declined

wingerlang(4065) 4 days ago [-]

>> Another, a task requesting Visual Studio (iOS devs have no need for any experience with this)

Ironically enough I just moved to another iOS job and have been sitting in VS for the better part of the last couple of days. Some shared systems are written in C# so all developers basically have to use it at some points.

I agree it shouldn't be a part of the interview though.

gwbas1c(3828) 4 days ago [-]

When I got something like that, I just ghosted. It's one of the very few times I ghosted an interview.

seanmcdirmid(2131) 4 days ago [-]

I was given a take home exam that I took one look at and said nope, I wouldn't enjoy working for the company. I actually appreciated this approach as a quick way to filter out job opportunities.

adwww(10000) 4 days ago [-]

I recently Skype-interviewed for a local startup run by a former Googler. He was very proud of how much the interview process was based on Google's, with multiple stages to ensure they get the highest-quality developers.

The startup sounded interesting, and I might have been prepared to spend the recommended 5 hours on the code test, had I had a chance to actually go into the office and meet the team...!

At least at the end of a Google interview you get to work for Google.

vincentmarle(10000) 4 days ago [-]

To be fair, even though the hourly expectations are often unrealistic, I much prefer a coding challenge (as a former hackathon goer, I likes me a challenge) over a sweaty nervous "whiteboard session" where they nerd grill you on some irrelevant algorithm questions. Unfortunately, most companies do both.

irrational(10000) 4 days ago [-]

We've joked about giving potential hires a challenge to fix a problem we are currently experiencing, but we wouldn't really do it. That takes some balls to ask interviewees to write or fix production code.

jogjayr(3913) 4 days ago [-]

I once had to do a full-stack coding task for a YC company - I think it was something like build Tic-Tac-Toe with backend validation and storage of game state. I built it, and instead of being called onsite, I had a phone call with an engineer. I expected to discuss my solution with the engineer and prepared accordingly, but they did not mention the assignment at all, and proceeded to ask me algorithm and database trivia questions. I did not get the job.

wycy(10000) 4 days ago [-]

Apple's Reminders app is so glitchy that your interview task probably would've been vastly superior to the actual consumer product.

Damogran6(10000) 4 days ago [-]

Had a similar security interview with a healthcare company. Three tabletop exercises that were all dumpster fires (no controls, no logs, no ability to research, etc.)...it was at that time I REALLY looked at the interviewers and noticed just how burnt-out they looked.

You can learn a lot about a company's pain points by the questions they ask during the interview...chances are they're problems they're struggling with at the moment.

ignoramous(3531) 4 days ago [-]

A lot of people don't realise it but a lot of the tasks you do for you employer could very well have been a startup. There's a reason your employer has you working on those problems: They know what their clients want.

Cisco is famous for funding startups by its former employees that satisfy those client wants. Google might be trying something similar with Area120?

Being successful at startups is, though, an entirely different matter, and I guess that's why most people don't take up the opportunity. The paychecks are golden handcuffs, for some.

I think these pg essays are relevant:


> The component of entrepreneurship that really matters is domain expertise. The way to become Larry Page was to become an expert on search. And the way to become an expert on search was to be driven by genuine curiosity, not some ulterior motive.


> At its best, starting a startup is merely an ulterior motive for curiosity.



> ...you don't need a brilliant idea to start a startup around. The way a startup makes money is to offer people better technology than they have now. But what people have now is often so bad that it doesn't take brilliance to do better.


> The best odds are in niche markets. Since startups make money by offering people something better than they had before, the best opportunities are where things suck most. And it would be hard to find a place where things suck more than in corporate IT departments. You would not believe the amount of money companies spend on software, and the crap they get in return. This imbalance equals opportunity.

> If you want ideas for startups, one of the most valuable things you could do is find a middle-sized non-technology company and spend a couple weeks just watching what they do with computers. Most good hackers have no more idea of the horrors perpetrated in these places than rich Americans do of what goes on in Brazilian slums.

> Start by writing software for smaller companies, because it's easier to sell to them. It's worth so much to sell stuff to big companies that the people selling them the crap they currently use spend a lot of time and money to do it. And while you can outhack Oracle with one frontal lobe tied behind your back, you can't outsell an Oracle salesman. So if you want to win through better technology, aim at smaller customers.

naveen99(3945) 2 days ago [-]

sorry for being offtopic: i see you list people you follow in your profile. Are you using a 3rd party service to follow them, or using the hn api yourself ?

bitL(10000) 4 days ago [-]

Are you sure you didn't sign any NDA, preventing you from working on ideas you might get during interview? Google used to have pretty water-tight policy on outflow of ideas (but inflow was encouraged) during interviews.

candu(4043) 4 days ago [-]

(Usual disclaimer: IANAL, just a person who's worked / consulted several places and signed several NDAs in his life.)

Here's the thing: the NDA you sign when you go onsite covers the company's intellectual property. It might cover the interview questions they ask you, but if they're not paying you for the work you do as part of a take-home interview question, and if you're doing it on your own hardware - it's not theirs, it's yours, and they have absolutely no say in what you do with that work afterwards.

Moreover: if you are subsequently hired by the company, they still don't own the work you did on your own time and hardware. (However: if you continue to work on it while employed by them, they might then own that follow-up work. Check your employment contract carefully, and if you're in a position to insist that employers do not own your off-hours work, insist away.)

Now, if you're currently at another company, and you complete the interview task on their hardware, and especially if you make the mistake of doing that work during company work hours...well, your current employer might then own your interview work. This is a matter usually covered by your employment contract (and another excellent reason to read those carefully).

saagarjha(10000) 4 days ago [-]

Does Google make you sign an NDA before you interview?!

RickS(4128) 4 days ago [-]

They have nothing of the sort, as of last year. They ask politely that you not share the specific design prompts or your solutions, so that future candidates can't copy you, but they're all over Medium anyway. There are no legal teeth behind the request.

This applies to the design prompt specifically, which is an early-stage take-home thing. I don't recall whether google had an NDA for the onsite interview portion itself, but I think their badge signin thing might have one baked in, and other companies of equivalent size had paper NDAs as part of checkin. Presumably they only care about you stealing the cool stuff you hear about, and you wouldn't be in danger for running with their (intentionally generic) interview prompts on your own.

yitchelle(493) 4 days ago [-]

An interesting aside: is the NDA bidirectional? In most interviews, the candidate discusses their past experiences and past projects, which could contain sensitive information leaked unintentionally. Perhaps even a side project the candidate is working on could be covered by an NDA. Thoughts?

rdez6173(10000) 4 days ago [-]

I am a software engineer and I was once asked during an interview at a large hedge fund to pick a side and debate why war is justified.

When I pressed them about the relevance, they indicated that they often have heated debates on all manner of topics, so they wanted to see my thought process.

I enjoy solving complex problems, but socio-ethical problems are way outside of my wheelhouse.

I politely indicated that I didn't think the company was a good fit for me.

jcadam(4115) 4 days ago [-]

> I am a software engineer and I was once asked during an interview at a large hedge fund to pick a side and debate why war is justified.

Heh, I actually wouldn't have minded that too much - but in addition to being a software engineer I'm also a former Army officer.

In any case, I think such a task can be relevant. If you're working in a fast-paced and competitive environment (esp. one with a lot of non-technical staff), you need to be able to hold your own in an argument. You wouldn't want to be the guy who is always right but gets overruled 99% of the time because you're unable to persuade others.

> ...but socio-ethical problems are way outside of my wheelhouse.

Mine too, and probably 99%+ of the world's population. But that doesn't stop most people from having strong opinions on subjects they don't understand.

gav(4130) 4 days ago [-]

> When I pressed them about the relevance, they indicated that they often have heated debates on all manner of topics, so they wanted to see my thought process.

One of my favorite interviews was when I was posed the question: 'you work for the railroad, we've just spent several months asking our customers how they feel about the service; the majority of them are unhappy, what do you do?'.

I spent an enjoyable hour in front of a whiteboard working on ideas with the interviewer.

acomjean(10000) 4 days ago [-]

Yeah. I had an interview with 2 tests. One had you check a list of personal attributes you see in yourself (diligent, careful, fun, approachable...). On the second side of the paper, the same list of terms: "check all that others see you as". Then a one-hour paper coding exam.

I've also done a one-hour Codility online coding test more recently. I thought I'd hate it, but frankly it wasn't over the top hard and was kinda fun.

justbaker(4128) 4 days ago [-]

> ...but socio-ethical problems are way outside of my wheelhouse.

All the more reason to go outside one's comfort zone.

dentemple(10000) 4 days ago [-]

I'm pretty sure they were trying to get a sense of your political leanings, without actually saying so.

conanbatt(4129) 4 days ago [-]

What a 'culture fit' filter that looks like

robocat(4118) 4 days ago [-]

They are simply stating one strong aspect of their workplace culture.

You self-selected out, because you didn't like their culture.

Seems like a great outcome for both parties!

gwbas1c(3828) 4 days ago [-]

I would think that's completely relevant for a hedge fund. They probably need to debate investments like this all the time.

starpilot(2850) 4 days ago [-]

In all fairness, there have been so many articles about the craziness of that hedge fund, you should have known what you were getting into. Better to find that out in the interview.

officehero(10000) 4 days ago [-]

The correct answer in this case is of course YES, it's justified. It's a hedge fund after all. Motivate with evolutionary theory starting with Darwin and ending with Nietzsche, maybe even bring in Sun Tzu depending on interviewer.

Loughla(10000) 4 days ago [-]

How is that even remotely related to the job at hand?

Why would you sabotage possible good candidates just so you can get your needless debate rocks off?

yumraj(3480) 4 days ago [-]

I love that question, and may use it, or a variant, in one of my future interviews.

My reasoning: I would like my team members to be able to contribute to all aspects of product development. This includes not only engineering decisions but also working with the product managers to identify both potential new features and shortcomings or issues with some requirements. It also tells me how narrow or broad a particular person's knowledge base is.

Let me try to put it in other words: it may tell me whether a particular frontend engineer would be open to exploring backend development. What if the language stack changes: will they be open to exploring new languages, stacks, frameworks, OSes, platforms? Or will they be limited to what they know? Will they just code what they're told, or will they be able to form and express an opinion about yet unknown things?

agrippanux(4118) 4 days ago [-]

When interviewing execs who would be building new departments a few years ago, I would pose the question of 'In the wake of Arab spring in Libya, the people decide that you are their next leader. What happens in your first 100 days?'.

The answers ran the gamut from lazy to fantastic to terrifying. A lot of answers were of the generic 'coalition building' variety. The better answers identified key areas to focus on, like infrastructure, basic services, etc. The best answers had clear goals and possible government structures supporting accountability.

Bad answers had the exec consolidating power and crushing opposition. The worst answers had the exec killing people to achieve their goals. Not joking, I had several answers that were 'I would find my rivals and kill them'.

Overall I thought it was a good question, as those who performed well on it and were hired built great sustainable orgs, while those who did poorly were usually shown the door within a year. Those who did well were able to take a crazy situation, break it down into smaller problems, and then solve them, while those who did poorly usually relied on either escaping the problem via committees or flat-out crushing opposition.

hguant(10000) 4 days ago [-]

Sounds like they've bought into the Heinlein/Rand general specialist myth (quote below) and were looking for that. Problem is, it's very easy to mistake lunatic confidence for competence, especially in an interview, and especially given the personality traits that work well in hedge funds/general finance.

>"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects."

davidbanham(3684) 4 days ago [-]

> You have '4–6 hours'... No one acknowledges the fact that in reality, you're about to dedicate up to 5 working days on this task

I fixed that problem for my candidates then turned it into a startup:


greysteil(4032) 4 days ago [-]

This is a really neat idea.

pbhjpbhj(3911) 4 days ago [-]

Meta: I assume you're doing a website update at the moment. I prefer the blue! Though I might still use a white version on the homepage to remove emphasis from the header. Also, on you signup page it would IMO look nicer if you pinned the page footer (eg https://matthewjamestaylor.com/bottom-footer).

ashelmire(10000) 3 days ago [-]

Good. No more 4 hours challenges with unlimited scope.

amelius(883) 4 days ago [-]

> You have '4–6 hours' to design a slick product, with a memorable brand and cohesive working method. No one acknowledges the fact that in reality, you're about to dedicate up to 5 working days on this task, with the potential for them to ghost you straight afterwards.

To make this more fair, why not let the designer work at the company's office, so the interviewer can actually see how much time it took, and the interviewee doesn't waste more time than necessary?

smeyer(10000) 4 days ago [-]

My employer has sometimes given out challenges that take a few hours. We just ask the candidate when they'd like to do the take-home and email it to them on that schedule. Then they have to submit their solution within the allotted time. It's a nice way to make sure the candidate knows it really only will take a couple hours, because neither side wants the candidate to spend 30 hours on a 3 hour take-home.

itronitron(4126) 4 days ago [-]

>> advised that we should spend no more that 3–5 hours on the task

I have to wonder if they didn't get the position because they spent more than the advised time for the task.

thomasahle(4076) 4 days ago [-]

> To make this more fair, why not let the designer work at the company's office, so the interviewer can actually see how much time it took, and the interviewee doesn't waste more time than necessary?

I had an interview once where we were on the phone, I got the task, and they told me they'd call me back 2 hours later.

Might be easier if the office is far away.

hbosch(3842) 4 days ago [-]

FTA: You have '4–6 hours' to design a slick product, with a memorable brand and cohesive working method.

I don't think Google necessarily requested a full end to end product. A nice logo/app icon for the "branding" portion and some UX concepts can definitely be fleshed out in a single work day, especially with zero tech/biz requirements IMO.

Maybe you'd need to take an extra day just to brainstorm first... but actually pushing pixels, if this designer compressed his output a bit, could have happened in 6 hours.

meheleventyone(10000) 4 days ago [-]

These tests are being used as a pre-filter to avoid the expense of bringing someone into the office.

gorkish(10000) 4 days ago [-]

It's not supposed to be fair. Interview questions are designed to differentiate candidates as quickly and broadly as possible. This is one where the notional goal of completing the project is different than the true goal of seeing how the candidate performs when given an impossible scenario.

The author of this article fucked up when he tried to pass off 5 days of work as an example of what he could accomplish in 5 hours. I am glad he was able to find inspiration from it, but for the success of his startup I hope he also is able to understand other, better ways he could have approached the interview task.

mrisoli(10000) 4 days ago [-]

I've done that a few times actually.

At my last company they flew me in from another continent to do a 3-day assessment. I worked with the team on a legitimate feature (which would be trashed afterwards, as they couldn't use the code because I was unpaid). I liked it: I met the team, I interacted with them, I did the task, and got a job offer in the end.

All engineers at the company highly valued the 3-day assessment because we got to select great people to work with, but we were getting too many candidates giving up, or false negatives, so they eventually cut it down to 1 day despite our protests.

Insanity(3647) 4 days ago [-]

That's what I thought too. At one of the companies I interviewed at, we had a programming task that could take a max of 4 hours. The interview was at the company itself.

Less convenient for people who have a different job at the same time though.

djhaskin987(4098) 4 days ago [-]

Is HN truly this bad of a bubble that everyone thinks interview tests are normal?

The last time I had to do a test it was on site and I was just out of college. Most interviews just talk about the challenges of the team, my past work on similar problems, and how I solved them. I've been on the other side of the table too. It's usually easy to tell if the applicant is a quack or if they are skilled by how they talk about different technology: commiserating about familiar pain points, how they solved common problems. I almost feel at this point that it's unprofessional to hand out tests to people.

ep103(4121) 4 days ago [-]

My company tends to hire more junior developers. I've been pushing for years to raise our requirements for certain teams, but it's a cultural understanding problem with people at the top. Coding tests are the only strong indicator I've found in the interviewing process. I've had candidates talk their way through all of the other steps, only to find they can't code their way out of a paper bag. I really think the only accurate way to tell how well someone codes is by looking at their code, and a coding test is a good way of not only doing that, but also comparing code from one developer to another in a controlled manner. I'll take it over HackerRank and other similar shitty websites any day.

thomascgalvin(10000) 4 days ago [-]

> It's usually easy to tell if the applicant is a quack or if they are skilled by how they talk about different technology: commiserating about familiar pain points, how they solved common problems. I almost feel at this point that it's unprofessional to hand out tests to people.

The problem with what you're describing is that it's using a proxy (discussion) to vet a particular skill (ability to program).

It's very easy to vet someone's programming skill by having them program, and it doesn't take much longer than a wide ranging discussion, so why not skip the proxy and test what we're actually hiring for?

Should a programming test be several hours (or several days) long? No. But an hour or two, I can't see how that's a terrible imposition.

013a(10000) 4 days ago [-]

Yeah; I think the whole 'coding tests' thing is very much a Silicon Valley thing (which has of course been adopted by some non-SV companies, but it's not the norm). I feel bad for HN readers who live there and that's all they know.

The best advice I've ever heard about interviewing is that you need to mentally reframe it from 'I need this job' to 'they need me, convince me to work here'. A big part of this is building a network and finding jobs through that (which is how everyone has always found careers, mind you); when you're introduced positively through a third party the team trusts, the conversation inevitably biases more toward the team hunting you, not the other way around.

And if you can't do any of that, then you should probably be happy that companies do big project coding/design interviews that give you a deep chance to show off your skills. Over half of what teams are looking for in candidates isn't the hard skills; it's social aptitude, teamwork, networking, the soft skills. If you don't have a network and had to land this interview via a lowest-common-denominator recruiting page or email, then at least you can dazzle them with a great project, but you're already starting out behind.

dnr(4131) 3 days ago [-]

I wish it were so, but it's not. I usually start interviews with 5-10 minutes of talking about a previous project of the candidate, digging in to see how sophisticated the work was, how they solved problems, etc. I've had plenty of examples where someone bullshitted well enough to pretty much convince me that they knew what they were talking about, only to have it become clear in the more hands-on part of the interview that they couldn't possibly. I admit I'm not a great interviewer, so perhaps a really good interviewer could tease those things out, but I maintain that it's not always easy.

I think the usual situation is that the candidate was on a team where some impressive work was being done by others, understood it enough to talk about it in detail, but didn't actually do any of the challenging work themselves and couldn't work at that level consistently.

I'd like to think most people aren't actively lying or bullshitting, but it's easy enough to implicitly take credit for work done by a team when you're trying to sound impressive.

trevor-e(3388) 4 days ago [-]

My team does a take-home coding test and we've found it to be very successful. The people we've hired as a result have been great and actually told us they preferred it to other interview methods they've encountered.

The challenge is coming up with a standalone task that demonstrates the skills needed for the job while _actually_ requiring the amount of time you say it should take. Seems like most commenters here are upset that a '2-hour test' really takes 6+ hours. That should never be the case and I wouldn't want to work for that place either.

Another key part of this is should a candidate pass the take-home test portion of the interview, the in-person interview should be fairly quick and mostly a judge of fit/character. It should not be loaded with more tech questions. The reason this works best for everyone is the interviewee doesn't need to waste a vacation day on-site unless they have a really good chance of getting the job.

midasz(10000) 4 days ago [-]

Am not in the USA, Netherlands actually.

For my internship I first had a good conversation with the lead developer. After it was determined that I was a culture fit and didn't bullshit them, I got a small simple assignment which I had to do on a whiteboard. Later found out it wasn't really meant to figure out whether or not I could write a simple function, but more to see what my thought process was. Like, did I instantly start writing one big function that did stuff - or did I break the problem down into smaller steps.

Anyways, the interview for my second job (for a fairly big consulting firm) was purely focused on culture fit, and questioning about stuff I've previously made.

Third and current job was the same as the second - only on my initiative I came by on another day to check out what the other developers were actually doing and trying to help them out.

I don't think a test is really useful; if I'm not up to par (or they don't meet my expectations), either of us can decide to end the contract in the first month. This focus on tests strikes me as silly, especially in the USA, since you can just be fired for no reason anyway.

edit: I do have an (outdated) github repo with some code, but not much else.

hessproject(10000) 4 days ago [-]

Just my two cents, but as a candidate I much prefer the tests to a whiteboard. I've had interview tests for a majority of my recent (senior level) positions in NYC, but the questions generally aren't to this extreme, certainly not basis for a new startup level problems.

More like 'we've laid out an API contract, implement these 3 endpoints in a language you like along with these few extra small requirements and some unit tests'. Usually takes 2-3 hours and I think it's a more fair assessment of my work and at least I get to use my own environment and work on my own time with tools I know rather than with someone sitting over my shoulder and a marker. It also helps drive the discussion during in-person interviews away from the binary tree puzzle type questions and focuses more on design decisions and other considerations I took in to account.

threwawasy1228(10000) 4 days ago [-]

When people are turning their interview questions into successful companies, maybe it is time to start asking some easier interview questions. This is a visceral demonstration of how absolutely ridiculous interviews have gotten.

macspoofing(10000) 4 days ago [-]

>maybe it is time to start asking some easier interview questions

Why? The big tech companies offer high salaries and get lots of applicants. They can afford to be picky.

And by the way, OP went above and beyond what they asked of him. In fact, there's a chance that this hurt his chances.

>This is a visceral demonstration of how absolutely ridiculous interviews have gotten.

This kind of format isn't standard across the industry.

bufferoverflow(3913) 4 days ago [-]

Google can afford to do that. They have so many applicants willing to jump through the hoops, they can pick the top 1-2%.

RickS(4128) 4 days ago [-]

I think that is crazy. You can turn laundry folding into a successful company. It's not that the interview question bar is high, it's that the 'what can you make a company' bar is effectively on the floor. This is good.

Also, OP launched on product hunt today. Calling them a 'successful company' is generous if not outright false, though I wish them every success.

Stated less dramatically, what happened here is 'I turned my design interview prompt into a real piece of software'. That doesn't imply that google's interview questions are too hard. A software company asking for a deliverable that approximates software is sane.

I think the time that google demands for their take-homes is a bit much (explicitly, this is 4-6 hours; in practice, it's 20 to 30 if you want to deliver at a high caliber). But the questions/prompts themselves are intentionally quite boring, and exist to see how far you can take them.

I found them to be pretty good. These kinds of questions depend somewhat on chemistry between candidate and prompt, so they ask more than one of them, which is great. The takehome I picked was the only one of the 3 offered I could even pretend to care about. Of the two asked onsite, one I felt I delivered on at only the most basic level, and one I think I could have credibly patented / raised a series A for were it a thing I cared to spend 2 years on.

AznHisoka(3486) 4 days ago [-]

Personally, I love interview questions with some real life practicality to it. Tell me a problem your company is facing now and I will be energized to give you a solution. Typical abstract questions like design a binary tree or solve an arbitrary puzzle don't interest me.

And sometimes it's easy to memorize such solutions. Translating a real-life problem into a computer science one requires some skill.

megaremote(10000) 3 days ago [-]

This startup is the equivalent of a task management app. I guess you can call that a startup: an app to help you manage your exercises. There are only 1,000 others out there.

JacKTrocinskI(4076) 4 days ago [-]

I call them low-self-esteem corporations: the interview process requires you to constantly flatter them and tell them how much you'd like to work for them. I have too much respect for my own time. What's important is on my resume, and the rest they can find out in a short interview; homework assignments and cover letters are plain silly.

swish_bob(10000) 4 days ago [-]

I once completed the task sent to me by a prospective employer, and then thanked them and said I wouldn't be submitting it, as it told me enough about the way they worked to know I wouldn't enjoy it.

Never forget that the interview process is two way ...

comboy(4098) 4 days ago [-]

Those few big companies probably have more than enough candidates. If you have too many candidates, you need to adjust interview difficulty to still hire only as many as you need. I don't think these companies care that the task is too difficult for 99% of people, or that there are smart people who are not motivated enough to complete it, as long as they can get as many smart and motivated people as they want.

That said, finding a job in IT if you are good at what you're doing is a dreamland currently, so declining if you think you won't enjoy working for a given company seems only natural.

Given dynamics above I think you're likely better off not working for one of those top few companies if you don't care about prestige and long term job stability.

ahoka(4065) 4 days ago [-]
saagarjha(10000) 4 days ago [-]

Are you complaining about the design of this app?

H1Supreme(10000) 4 days ago [-]

OP's design is much nicer than this. Maybe the '3-5 hours' requirement was hinting that it needed to be as 'material design generic' as possible. Which, this design clearly does.

zrail(1234) 3 days ago [-]

I recently completed an interview junket. One of the companies I really wanted to get into gave me a take home. I completed it in the time they said it would take, submitted it, and then got bounced out. My inside source said that the submissions are run through an acceptance test suite (not exposed to the interviewee, of course) that can automatically bounce people out of the process with no or little human intervention.

PROTIP: don't do that.

boltzmannbrain(3580) 3 days ago [-]

Why not do that? Coming from the other side of the table, an immense amount of engineer-hours is devoted to the interviewing process, including code review on take-home assignments. Thus we try to make the process efficient. Typically there's a set of unit tests provided in the assignment to help the interviewee, and a hold-out set of unit tests that gives us a score for basic correctness. The score, albeit a low bar, is a useful threshold for eliminating candidates unlikely to succeed in the process. Those above the threshold have their code passed on to an engineer to review beyond basic correctness, and move on in the interview process.
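The hold-out scoring described above could be sketched roughly like this (a hypothetical illustration, not this company's actual pipeline; the function names, the `(args, expected)` case format, and the 0.8 threshold are all assumptions):

```python
# Hypothetical auto-scorer: run a submission against hidden test cases
# and report whether it clears the pass-rate threshold for human review.

def score_submission(solution, hidden_cases, threshold=0.8):
    """Return (pass_rate, advance) for a candidate's solution callable.

    hidden_cases: list of (args, expected) pairs the candidate never sees.
    threshold: minimum pass rate required to reach human code review.
    """
    passed = 0
    for args, expected in hidden_cases:
        try:
            if solution(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing submission fails the case, not the pipeline
    rate = passed / len(hidden_cases)
    return rate, rate >= threshold


# Example: grading a trivial "add two numbers" submission.
cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0), ((10, 5), 15)]
rate, advance = score_submission(lambda a, b: a + b, cases)
print(rate, advance)  # → 1.0 True
```

The `try/except` matters: without it, one malformed submission aborts the whole batch instead of just losing that test case.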

yingw787(4043) 3 days ago [-]

That doesn't sound fair unless the acceptance test suite is available to the candidate before final submission. In which case, depending on how the infrastructure is set up, it could be expensive or slow. Seems to be a lose-lose situation.

JansjoFromIkea(4095) 4 days ago [-]

Is there a reason there's no naming and shaming going on here?

One company asked me to do an implementation of the Bay Area MUNI network which fetched live updates of the locations of all buses every 10 seconds; there was more to it but I can't remember it now. This was to be done in React with D3, and no additional libraries were to be used to help with integrating D3 maps in React. The role was junior level, the job listing did not specify D3 as a requirement (which pretty much guarantees a lot of people would start into it without realising that maps are one of the more complicated things to do in D3 as a newbie, or how poorly D3 plays with React), and the role was in Europe. If someone can confirm I'm not breaking any rules I'll happily share their name.

The best interview experiences I've had are with groups who just went through my (not very impressive) Github. A small coding challenge can be good as a thing to talk through during the final interview but anything longer than that and I'll tend to pitch sending them something I wanted to make in my own time regardless.

zymhan(3005) 4 days ago [-]

> Is there a reason there's no naming and shaming going on here?

No, there is no rule, unless you signed a contract stating you will never disclose any interview information, and even then, they'd have to find your post, find out who you are, and then take you to court over an extremely shaky premise.

Just name the damn company people! You're giving companies far more credit than they deserve in these situations. I'd argue you owe it to other people who may follow in your steps to know when a company is exploitative, deceptive, or just plain corrosive.

edit: now with less snark

josh2600(3864) 4 days ago [-]

Someone on my team once asked an iOS engineer to add a button to a codebase during an onsite interview. That is actually a horrible test to complete in 45 minutes if the codebase is large (and particularly hard if it's not super well maintained because even just familiarizing yourself with the codebase can take a great deal of time).

It's one of those things that sounds easy but really, really isn't possible to do in the time allotted. I learned a lot about interviewing people that day.

megaremote(10000) 3 days ago [-]

Eh, I would way prefer that. Much easier. I just refuse take home tests now.

ScottFree(4116) 4 days ago [-]

I can't tell if this is brilliant or sadistic. On the one hand, it really tells you who's smart enough to realize the task can't be done in the allotted time. On the other hand, not being able to do something as simple as adding a button in 45 minutes has to screw with the candidate's mind. That sort of thing would shatter what little confidence I have. I'd probably tank the rest of the interview.

> I learned a lot about interviewing people that day.

Could you share what you learned?

devit(10000) 4 days ago [-]

It seems totally possible to do it in less than 10 minutes as long as:

1. The button doesn't have to do anything

2. You give them a fast computer with an IDE loaded with the project

3. The developer is familiar with the language, UI framework and IDE

4. You aren't doing anything crazy and uncommon like using a non-standard UI toolkit or using an ad-hoc source code preprocessor

5. The project can be built and run with a single command and can be built and start in less than a minute

PorterDuff(10000) 4 days ago [-]

I rather like this idea in the sense that it tells you a lot about your own codebase. There is a lot of stuff out there that only makes sense to the author.

Historical Discussions: The struggles of an open source maintainer (May 17, 2019: 702 points)
The struggles of an open source maintainer (May 16, 2019: 9 points)

(702) The struggles of an open source maintainer

702 points 3 days ago by ngaut in 685th position

antirez.com | Estimated reading time – 9 minutes | comments | anchor

antirez 3 days ago. 80338 views.

Months ago the maintainer of an OSS project in the sphere of system software, with quite a big and active community, wrote me an email saying that he struggles to continue maintaining his project after so many years, because of how psychologically taxing such an effort is. He was looking for advice from me. I'm not sure I'm in a position to give advice, but I told him I would write a blog post about what I think about the matter. Several weeks passed, and multiple times I started writing such a post and stopped, because I hadn't had the time to process the ideas for long enough. Now I think I was able to analyze myself, to find answers inside my own weaknesses, struggles, and desire for freedom, which inevitably invade the human mind when it does some task, one that also has negative aspects, for a prolonged amount of time. Maintaining an open source project is also a lot of joy and fun, and these latest ten years of my professional life are surely memorable, even if not the absolute best (I had more fun during my startup times, after all). However, here I'll focus on the negative side; just make sure you don't get the feeling it is only that, because there is also a lot of good in it.

Flood effect

I don't believe in acting fast, thinking fast, winning the competition on time, and stuff like that. I don't like the world of constant lack of focus we live in, because of social networks, chats, emails, and a schedule full of activities. So when I used to receive an email about Redis back in the early times of the project, when I still had plenty of time, I was able to focus on what the author of the message was trying to tell me. Then I could recall the relevant part of Redis we were discussing, and finally reply with my real thoughts, after considering the matter with care. I believe this is how most people should work, regardless of what their job is.
When a software project reaches the popularity Redis reached, and at the same time communication between individuals is made so simple by the new social tools, and you have an attitude of being "there" for users, the amount of messages, issues, pull requests, and suggestions the authors receive grows exponentially. At the same time, at least in the case of Redis (but I believe this is a common problem), the number of very qualified people who can look at this input from the community grows very slowly. This creates an obvious congestion. Most people try to address it in the wrong way: with pragmatism. Close the issue after two weeks if the original poster doesn't reply to a question. Close all the issues that are not well specified. And other "inbox zero" solutions. The reality is that to process community feedback well you have to take the time it needs; otherwise you just pretend your project has a small number of open issues. Having the resources to hire core-level experts for each Redis subsystem, to work on OSS full time, would work, but it is not feasible. So what happens? You start to prioritize more and more what to look at and what to ignore. And you feel like a piece of shit for ignoring so many things and people, and contributors come to believe you don't care about what they have to give you. It's a complex situation. Usually the end result is an attitude of mostly addressing critical issues and disregarding all the new stuff, since the new stuff is not yet in the core, and who wants an even larger code base with even more PRs and issues? Maybe written in a more convoluted style than your usual one, too; so, more complexity, and good luck tracking down the root cause when there is a critical bug in there.

Role shifting

As a result of the "flood effect" described above, you suddenly also change jobs. Redis became popular because I supposedly am able to design and write software.
Now, instead, most of the work I do is looking at issues and pull requests, and I also feel that I could do many of the contributions I receive better myself. Some will be better quality than I could manage, because there are also better programmers than me contributing to Redis, but most, by the simple nature of big numbers, will be average contributions written to solve a problem that was contingent for the folks who submitted them. When I design for Redis, by contrast, I tend to think of Redis as a whole, because I've been writing this thing for years. So what you were good at, you no longer have time to do. This in turn means fewer big organic new features. My solution? Sometimes I just stop looking at issues and PRs for weeks, because I'm coding or designing: that is the work I really love and enjoy. However, this in turn creates way more psychological pressure on me. To do what I love and can do well, I have to feel like shit.

Time

There are two problems related to working on the same project for a prolonged amount of time, at least for me. First, before the Redis experience I never worked every weekday of my life. I could work one week, stop for two, then work one month, then disappear for another two months. Always. People need to recharge, to get new energy and ideas, in order to do creative work. And programming at a high level is a fucking creative job. Redis itself was created like that for the first two years, which is when the project evolved at its fastest speed, because the sum of the productivity of me working only when I want is greater than the productivity I have when I'm forced to work every day at a steady pace. My work ethic allowed me a very discontinuous schedule when I was working alone with my own companies. Once I started to receive money to work on Redis, my ethics no longer permitted my old pattern, so I forced myself to work on a normal schedule. This has been a huge struggle for me, for many years at this point.
Moreover, I'm sure I'm doing less than I could because of it, but this is how things work. I never found a way to solve this problem. I could tell Redis Labs that I want to return to my old schedule, but that would not work, because at this point I really "report" to the community, not to the company. Another problem is that working on the same project for a long time is also mentally complex. I used to change projects every six months in the past. Now for ten years I have done the same thing. In that regard I tried to save my sanity by having sub-projects inside Redis. One time I did Cluster, another time disk storage (now abandoned), another HyperLogLogs, and so forth: things that bring value to the project but that, in isolation, are something else. But eventually you have to return to the issues and PRs page and address the same things every day. "Replica is disconnecting because of a timeout," or whatever. Let's investigate that again.

Fear

I always had some fear of losing the technological leadership of the project. Not because I think I'm not good enough at designing and evolving Redis, but because I know my ways are not aligned with: 1) what a sizable amount of users want; 2) what most people in IT believe software is. So I had to constantly balance what I believe to be good design, feature set, speed of development (slow), and size of the project (minimal) against what most of the user base expected me to deliver. Fortunately there is a percentage of Redis users that understands the Redis way perfectly, so at least from time to time I get some words of comfort.

Frictions

Certain people are total assholes. They are everywhere; it is natural, and if you ask me, I even believe there are a lot more nice people in programming than in other fields. But you'll always see a percentage of total jerks.
As the leader of a popular OSS project, in one way or another you'll have to confront these people, and that is maybe one of the most stressful things I ever did in the course of Redis development.

Futility

Sometimes I believe that software, while great, will never be huge like writing a book that survives for centuries. Not because it is not as great per se, but because, as a side effect, it is also useful... and will be replaced when something more useful comes around. I would like to have time for other activities as well. So sometimes I believe that all I'm doing is, in the end, futile. We'll design and write systems, and new systems will emerge; but will anyone who just stays in software, instead of staying in "software big ideas," ever set a new mark? From time to time I think I potentially had the ability to work on big ideas, but because I focused on writing software instead of thinking about software, I was not able to use my potential in that regard. This is basically the opposite of impostor syndrome, so I guess I have a big opinion of myself; sorry for that, I should be more humble. That said, I was able to work for many years doing things I really loved, things that gave me friends, recognition, and money, so I don't want to say it was a bad deal. Yet I totally understand people struggling to stay afloat once their projects start to become popular. This blog post is dedicated to them.

All Comments: [-] | anchor

Sir_Cmpwn(295) 3 days ago [-]

I don't maintain anything as big as Redis, but I've faced many similar problems all the same and I think I have an approach which makes it palatable. I wrote about my approach at length here:


But the main thing is that almost all bug reports, feature requests, and so on, get sent to /dev/null. Users who care about a problem are expected to work on that problem themselves. In the case of software like Redis, pretty much everyone reporting a bug is also qualified to fix that bug, so it works particularly well.

Then I focus only on helping new contributors get their bearings and making regular contributors happy and comfortable with their work on the project. So far this approach has been very successful for me - I don't get burned out, and neither do my contributors, and we have happy, healthy communities where people work at a pace which suits them best and aren't stressed or overwhelmed.

Sure, lots of feature requests and bug reports get neglected, but I think the net result is still a very positive impact on the project. The occasional drive-by bug submitter provides far less value to the project than someone who writes even one patch. Focusing on keeping the people who create the most value happy makes for a more productive project and a better end result. Some people are put out by the fact that their bug report or feature request goes unanswered, but I can quickly put the guilt out of my mind by reminding myself that ignoring them benefits the project. And in practice, I generally have time to give people some words of encouragement and a nudge in the right direction towards writing a patch without burning myself out.
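The triage policy described above can be boiled down to a small filter. This is only a hypothetical sketch of the idea, not Sir_Cmpwn's actual tooling; the names and fields in it are invented for illustration.

```python
# Hypothetical sketch of the triage policy above: items from known
# contributors (or actual patches) get attention; drive-by bug reports
# and feature requests go to the "/dev/null" pile. All names invented.

KNOWN_CONTRIBUTORS = {"alice", "bob"}

def triage(notifications):
    """Split incoming items into an 'attend' pile and an 'ignore' pile."""
    attend, ignore = [], []
    for item in notifications:
        if item["author"] in KNOWN_CONTRIBUTORS or item["kind"] == "patch":
            attend.append(item)   # contributors and patches get attention
        else:
            ignore.append(item)   # drive-by reports are dropped
    return attend, ignore

inbox = [
    {"author": "alice", "kind": "bug"},     # known contributor: attend
    {"author": "stranger", "kind": "bug"},  # drive-by report: ignore
    {"author": "carol", "kind": "patch"},   # a patch: attend
]
attend, ignore = triage(inbox)
print(len(attend), len(ignore))  # prints: 2 1
```

The point of the sketch is that the filter keys on who is asking and whether work is attached, not on the merits of the report itself, which is exactly what makes the policy cheap to apply and controversial to receive.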

tda(10000) 3 days ago [-]

I found the writings of Peter Hintjens (creator of ZeroMQ) very inspiring, where he describes the merits of Optimistic merging (OM). Quoting at length from http://hintjens.com/blog:106:

Standard practice (Pessimistic Merging, or PM) is to wait until CI is done, then do a code review, then test the patch on a branch, and then provide feedback to the author. The author can then fix the patch and the test/review cycle starts again. At this stage the maintainer can (and often does) make value judgments such as 'I don't like how you do this' or 'this doesn't fit with our project vision.'

In the worst case, patches can wait for weeks, or months, to be accepted. Or they are never accepted. Or, they are rejected with various excuses and argumentation.

PM is how most projects work, and I believe most projects get it wrong. Let me start by listing the problems PM creates:

    It tells new contributors, 'guilty until proven innocent,' which is a negative message that creates negative emotions. Contributors who feel unwelcome will always look for alternatives. Driving away contributors is bad. Making slow, quiet enemies is worse.
    It gives maintainers power over new contributors, which many maintainers abuse. This abuse can be subconscious. Yet it is widespread. Maintainers inherently strive to remain important in their project. If they can keep out potential competitors by delaying and blocking their patches, they will.
    It opens the door to discrimination. One can argue, a project belongs to its maintainers, so they can choose who they want to work with. My response is: projects that are not aggressively inclusive will die, and deserve to die.
    It slows down the learning cycle. Innovation demands rapid experiment-failure-success cycles. Someone identifies a problem or inefficiency in a product. Someone proposes a fix. The fix is tested and works or fails. We have learned something new. The faster this cycle happens, the faster and more accurately the project can move.
    It gives outsiders the chance to troll the project. It is as simple as raising an objection to a new patch: 'I don't like this code.' Discussions over details can use up much more effort than writing code. It is far cheaper to attack a patch than to make one. These economics favor the trolls and punish the honest contributors.
    It puts the burden of work on individual contributors, which is ironic and sad for open source. We want to work together yet we're told to fix our work alone.
Now let's see how this works when we use Optimistic Merge. To start with, understand that not all patches nor all contributors are the same. We see at least four main cases in our open source projects:

    Good contributors who know the rules and write excellent, perfect patches.
    Good contributors who make mistakes, and who write useful yet broken patches.
    Mediocre contributors who make patches that no-one notices or cares about.
    Trollish contributors who ignore the rules, and who write toxic patches.
PM assumes all patches are toxic until proven good (4). Whereas in reality most patches tend to be useful, and worth improving (2).

Let's see how each scenario works, with PM and OM:

    PM: depending on unspecified, arbitrary criteria, patch may be merged rapidly or slowly. At least sometimes, a good contributor will be left with bad feelings. OM: good contributors feel happy and appreciated, and continue to provide excellent patches until they are done using the project.
    PM: contributor retreats, fixes patch, comes back somewhat humiliated. OM: second contributor joins in to help first fix their patch. We get a short, happy patch party. New contributor now has a coach and friend in the project.
    PM: we get a flamewar and everyone wonders why the community is so hostile. OM: the mediocre contributor is largely ignored. If patch needs fixing, it'll happen rapidly. Contributor loses interest and eventually the patch is reverted.
    PM: we get a flamewar which troll wins by sheer force of argument. Community explodes in fight-or-flee emotions. Bad patches get pushed through. OM: existing contributor immediately reverts the patch. There is no discussion. Troll may try again, and eventually may be banned. Toxic patches remain in git history forever.
In each case, OM has a better outcome than PM.

In the majority case (patches that need further work), Optimistic Merge creates the conditions for mentoring and coaching. And indeed this is what we see in ZeroMQ projects, and is one of the reasons they are such fun to work on.
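The two policies compared in the scenarios above reduce to a small decision rule. The following is my paraphrase of Hintjens' four contributor cases, not code from any ZeroMQ project; the field names are invented for illustration.

```python
# Sketch of the two merge policies quoted above, paraphrasing Hintjens'
# four contributor cases. The fields "toxic" and "perfect" are invented.

def om_action(patch):
    """Optimistic Merge: merge first, improve or revert in public later."""
    if patch["toxic"]:
        return "revert"           # case 4: existing contributor reverts, no debate
    return "merge"                # cases 1-3: merge now, fix in-tree together

def pm_action(patch):
    """Pessimistic Merge: gate on CI, review, and maintainer judgment."""
    if patch["perfect"]:
        return "merge"            # case 1: eventually accepted
    return "request-changes"      # cases 2-4: contributor must rework alone

# The majority case: a useful but broken patch (case 2).
useful_but_broken = {"toxic": False, "perfect": False}
print(om_action(useful_but_broken))  # prints: merge
print(pm_action(useful_but_broken))  # prints: request-changes
```

The asymmetry is visible in the common case: OM turns a flawed patch into a shared in-tree fixing session, while PM sends its author away to rework it alone.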

MakeUsersWant(10000) 3 days ago [-]

So that is why so many OSS projects are full of bugs and horrible usability. In the past, I have tried to help with clear, reproducible bug reports. Quite often I spent an hour tracking down a bug, writing up why it matters, and then got ignored or dismissed.

Those OSS projects that deliver a quality product also took my bug reports seriously (and sometimes disagreed).

outime(10000) 3 days ago [-]

>In the case of software like Redis, pretty much everyone reporting a bug is also qualified to fix that bug, so it works particularly well.

There is a good number of people who have never written any C, and even more who have no idea about the internals of Redis, yet they are Redis users and they do find bugs. I wouldn't expect those users to write a patch for every bug they find, to be honest.

alias_neo(10000) 3 days ago [-]

I'm interested in your approach here because I generally disagree with ignoring bug submissions from 'drive-by's.

As a developer, I might have the time to investigate and submit a bug for your code, but at the end of the day, it is probably just one more bug among the 100 other pieces of code standing between me and the code I need to write.

Getting familiar with the code of a project you're using, to the extent that you can submit a reasonable PR, is a not-insignificant time commitment.

I've found a variety of attitudes from upstream code I've reported to, ranging from 'you're wrong' to the bug being fixed and pushed to master within an hour.

I've formed an opinion, amongst the projects I work with, about which ones I should give more of my (professional) time to, and it usually begins with how receptive they are to a perfectly good bug report which I don't have the (professional) time to fix myself.

The ones that have proven 'pleasant' to work with have made it much easier for me to convince my bosses that their money is well spent giving back.

sheetjs(3241) 3 days ago [-]

We (https://sheetjs.com) maintain some reasonably popular projects (our most popular, https://github.com/SheetJS/js-xlsx/, has over 15K stars and sees millions of downloads per month).

It's important to remember why you are involved in open source. If those circumstances change, you should ask why you continue to remain involved. As soon as you lack a satisfactory answer, it's your cue to stop.

Many large open source projects start out as a passion project or a solution to a specific problem that the original developers faced. Over time, other people face similar issues and rally around your solution and it is really easy to fall into the trap of bearing their burdens. This is a bad response. Other people using your open source offerings do not create any sort of obligation on your part to care or respond to their concerns.

If someone really cares enough, they will incentivize your continued effort (with money or other considerations). It is unfortunately a cultural taboo to ask some of the more vocal critics to pay, but setting up that dialogue at least shuts down most of the comments.

IMHO the origin of most of these issues comes from the very thing that drives many people to open source in the first place: personal branding. If you remove your personal identity from the equation, it's a lot easier to 'turn off notifications'. Rants and criticisms are directed at this mystery character, not you personally. Since you are not personally tied to the project, working on open source feels like a distinct activity and is judged on its merits. You don't feel the same sense of obligations since you don't personally feel like you are disappointing the user base.

Sean-Der(4064) 3 days ago [-]

The sunk cost (maybe that's the wrong term) is really tough. It is hard to walk away because you have put so much effort in. And without a company behind it, a project will probably falter.

I really enjoy Open Source 99.99% of the time. But in that 0.01% I just remember how much work I put in.

ralphstodomingo(10000) 3 days ago [-]

Off-topic: I've used js-xlsx before, and didn't have any problems with it. Thank you for your efforts.

chii(3812) 3 days ago [-]

> trap of bearing their burdens

yep, don't bear other people's burdens without asking for money, or contributions (of labour - such as code)

Sean-Der(4064) 3 days ago [-]

I have only been doing Pion WebRTC(https://github.com/pion/webrtc) for a year, but the hardest ones for me have been

* Relationships tied to the project

It sort of burned me out when the first contributors moved on. I have been on the other side too, though: interests change, or you change jobs.

* Every user issue is urgent.

It is hard to figure out what the most important thing to work on is. If a user takes the time to file an issue, it is because something is causing them real problems. I have been on the other side, and it doesn't feel great to be ignored.

* Community members with different communication styles

People aren't so much assholes, but just communicate differently. It is really hard to mediate, no one is at fault but I just hate seeing people leave/get burned out from it.

imetatroll(10000) 3 days ago [-]

This is really nice. I wrote my own (probably bad) webrtc implementation for a personal project using gopherjs months ago. If I ever switch that project to webassembly, I will take a look at pion. Thanks for your efforts!

cerberusss(10000) 3 days ago [-]

As to figuring out what the most important thing to work on is:

I have some code on GitHub and got a change request. I just replied that I can do it for my current hourly rate. That makes priorities clear.

souprock(10000) 3 days ago [-]

He missed a big one: you have no way to stop Linux distributions from hacking up your software, and you'll suffer the consequences of whatever they do.

I hit this with procps (a package with ps, top, vmstat, free, kill...). It was horrifically demotivating, and it helped end my involvement as the maintainer, which ran from roughly 1997 to 2007. (The other big issue was real life intruding, with me joining a start-up and having 5 kids.)

I had plans for command-line option letters, carefully paying attention to compatibility with other UNIX-like systems, and then Red Hat would come along and patch their package in ways that blocked my plans. They were changing the interface without even asking for a letter allocation. I was then sort of stuck. I could ignore them, but then my users would have all sorts of compatibility problems and Red Hat would likely keep patching in their own allocation. Or I could accept the allocation, letting Red Hat have control over my interface.

Red Hat sometimes added major bugs. I'd get lots of complaints in my email. These would be a mystery until I figured out that the user had a buggy Red Hat change.

Patches would often be hoarded by Linux distributions. I used to regularly download packages and open them up to look for new crazy patches. Sometimes I took the patches, sometimes I ignored them, and sometimes I wrote my own versions. What I could never do was reject patches. The upstream software maintainer has no ability to do that.

The backlog of unresolved troubles of this sort kept growing, making me really miserable. Eventually I just gave up on trying to put out a new release. That was painful, since I'd written ps itself and being the maintainer had become part of my identity. Letting go was not easy.

Maybe it had to happen at some point, since I now have more than twice as many kids, but I will be forever bitter about how Red Hat didn't give a damn about the maintainer relationship.

bscphil(3900) 3 days ago [-]

One of the things I really like about Arch Linux is their policy of not making unnecessary changes to upstream software. The only changes usually made are those necessary to keep the software compatible with the other software on a rolling-release system, and these changes are usually temporary.

rurban(3289) 3 days ago [-]

I'm very happy with downstream patches. They come 90% from Red Hat, rarely from Debian or SUSE. The Red Hat patches are usually 100x better than other contributors' patches. Debian is usually the worst.

When I maintained ~250 cygwin packages I acted as a downstream too, and had to deal with upstream maintainers myself. Usually a horror, maybe because I came from cygwin, which was a crazy hack. Only postgresql said 'such a nice port'. For perl, gcc, python,... you were just a crazy one to be ignored.

kashyapc(10000) 3 days ago [-]

[Disclosure: I'm a long-term Red Hatter.]

Hmm. That's painful indeed. Sorry that you had such a dispiriting experience with the 'procps' package.

For what it's worth, allow me to share my experience (I joined after 2008; so I can only speak about that period onwards) of being at Red Hat. One of the reasons that keep me at Red Hat is the iron-clad (with some sensible exceptions, e.g. security embargoes) principle of Upstream First.

I see maintainers upholding that value every day (and the community can verify it; the source is out there). And I've seen several times over the years maintainers, including yours truly, vigorously reject requests for downstream-only patches or other deviations from upstream. When there are occasional exceptions, they need extraordinary justifications; either that, or those downstream patches are irrelevant in the context of upstream.

I've learnt enormously from observing inspiring maintainers at Red Hat (many of whom are also upstream maintainers) doing the delicate tango of balancing the upstream and downstream hats.

So if it's any consolation, please know that for every aberration, there are thousands of other packages that cooperate peacefully with relevant upstreams.

stickfigure(3038) 3 days ago [-]

As the parent of a startup and one singular demanding child, I have to ask... 10?! Wow! Never mind why, I want to know how you get anything done. Do you have staff? Do you have personal time to yourself, or with your wife? How many soccer games do you attend in a given week? How many bedrooms does your house have? How many of your kids are turning out to be programmers?

Forget maintaining software, I want to know how you maintain your existence. I don't think I could survive.

debiandev(10000) 3 days ago [-]

I've been packaging for Debian for a decade and I've run into a lot of unfriendly upstreams that would refuse to make any change to allow packaging, and I've never seen other Debian Developers treating upstream developers poorly.

aaron-santos(10000) 3 days ago [-]

More of a general question: is there any license or addendum clause which could have remedied this? Linux distribution creators love adhering to specific license requirements. Would a choice between distributing vanilla software vs not distributing have been more appealing?

LameRubberDucky(10000) 3 days ago [-]

Wow. Are you the Albert mentioned in the man page? Seems like some unneeded snark after your entry. I think I would have left too.

'Albert rewrote ps for full Unix98 and BSD support, along with some ugly hacks for obsolete and foreign syntax.'

cyphar(3643) 3 days ago [-]

That sounds really awful, and it's really disheartening to see the (ex)maintainer of such a core piece of Linux distributions being treated that way.

I work for a Linux distribution (SUSE) and I assure you that we don't all act this way. I've had to carry my fair share of downstream patches in openSUSE/SUSE packages, but I always make sure to submit the patches upstream in parallel. Quite a few people I know from Red Hat (and other distributions like Debian, Ubuntu, etc) do the same. It's possible that times have changed since then, or that it depends which package maintainers you are dealing with, but I hope it hasn't soured your opinion of all Linux distribution maintainers.

One thing that is a constant problem is that users of distributions keep submitting bugs to upstream bug-trackers. If there was one thing I wish I could change about user behaviour, it would be this. Aside from the fact that the bug might be in a downstream patch (resulting in needless spam of upstream), package maintainers are also usually better bug reporters than users because they are more familiar with the project and probably are already upstream contributors.

vertis(3836) 3 days ago [-]

Without trying to minimize this experience, I can't help but wonder if newer tools and techniques have mitigated some of this problem. For example, GitHub pull requests make it much easier to collaborate on one version rather than maintaining an upstream/downstream relationship.

ddebernardy(3145) 3 days ago [-]

Similar story here. And it's not just distributions. There are web dev shops out there that edit OSS to add some functionality without even bothering to change the version number or the user-facing information. It makes for plenty of head scratchers when their customers report a bug in a piece of software you wrote.

h91wka(10000) 3 days ago [-]

> He missed a big one: you have no way to stop Linux distributions from hacking up your software, and you'll suffer the consequences of whatever they do.

I remember the frustration when I found that NixOS maintainers downright crippled certain build systems to force people to use Nix...

contingencies(3144) 3 days ago [-]

Haha! What are the odds? Maintainer of ps spawns too many subprocesses, runs up against parallelism... I think it's the Unix story in a nutshell.

agumonkey(944) 3 days ago [-]

Crazy how social relationships leak back into the open source concept.

Sorry you had to endure all this... not fun.

nerdbaggy(4131) 3 days ago [-]

I don't know why so many open source users feel so entitled when they have done nothing for the project. I believe that anybody can help a project, whether through code, documentation, finances, graphic design, etc.

MakeUsersWant(10000) 3 days ago [-]

Most projects don't actually want help. Go and try it: report a few bugs or point out usability issues. See what happens.

bowmessage(10000) 3 days ago [-]

O/T, not meant to be rude, but under what moral framework do you find it reasonable to bring more than 10 children into this world?

cheschire(10000) 3 days ago [-]

I find that when people preface a statement with a negation, it has the opposite effect and draws focus to that aspect. 'With all due respect,' or 'no offense but,' or 'not to be rude but...'

dang(172) 3 days ago [-]

Please don't accost someone personally like this on HN.

We detached this subthread from https://news.ycombinator.com/item?id=19936320 and marked it off-topic.

bmh(10000) 3 days ago [-]

It seems like there's an opportunity for a tool to help solve this (ie something other than GitHub issues).

aeorgnoieang(4074) 3 days ago [-]

I'm sure different tools could help somewhat, but ultimately anything replacing GitHub issues would still involve some kind of social (i.e. communication) structure, and that's just a hard thing to build and maintain.

burtonator(2269) 3 days ago [-]

I'm threading the needle in a project I've been working on which is both Open Source and has premium features around cloud sync:


... it's been interesting. I'm trying to give everyone the best of both worlds but since I'm catering to a large user base I have to make sure to keep everyone happy.

Some people DO NOT want to do anything cloud. Some want their own cloud. Others want cloud and they want it easy.

I think part of what I'm learning is that in order to earn users' trust I'm going to either have to be around for a long time or get the thumbs-up from other organizations that can vouch for my positive intentions.

For example, I want to write up a document about our commitment to Open Source but of course that takes work and I don't have a ton of time.

analognoise(4097) 3 days ago [-]

There was a pop-up that asked for money. The language of the bar at the top gives the impression the program will go away soon unless money comes in, and the fact that someone was willing to inject pop-ups made it very difficult to trust: what if it goes commercial and someone is willing to inject ads or malware?

jakeogh(3173) 3 days ago [-]

Hey nice! I was not expecting to click that and get a working page without JS!

makecheck(3890) 3 days ago [-]

People seem to have a serious lack of understanding when making requests of others.

- When you send an E-mail, at least imagine that the recipient may have literally hundreds of other messages to dredge through and the time required to respond (and detail of response) may reflect that. Not personal, don't get mad at them.

- When you send an "instant message", it may be instant for you but for all you know the recipient is deep in the middle of something and won't respond for awhile. Not personal, don't get mad at them.

- The recipient may be on the other side of the planet. Err on the side of extra information so you don't wait days for a response that just has to ask you for more.

- When you file a bug, "thank you but do some homework". A person dealing with 1000 other things will not have time to hand-hold you through all the things you're not telling them yet. Be precise and complete. Be reasonable about when/if to expect a fix.

- And for that matter, in retail, or traffic, or 100 other things in life, you don't know as much as you think you do about what other people are dealing with. Stop for a second. Imagine their situation. That person not instantly serving you and only you has a dozen other things going on.

BeetleB(3962) 3 days ago [-]

>When you send an E-mail, at least imagine that the recipient may have literally hundreds of other messages to dredge through and the time required to respond (and detail of response) may reflect that. Not personal, don't get mad at them.

It's refreshing to see someone on HN say this. If you look at my comment history, you'll find a number of people replying to my comments to say it is incredibly rude not to answer someone's email promptly. My stance over the last few years has become: if anyone (including automated services) can put an email in my inbox, I am not obligated to read it. Until I have a reliable heuristic that distinguishes well-thought-out emails from the crud, I don't feel obligated to spend my limited time tending to my inbox. The only realistic solution is to make sending emails incur a cost, in order to curb the quantity of email. Your sending me an email doesn't obligate me to read or respond to it.

I'm not an open source maintainer who gets a ton of support emails. Yet if I feel my stance is necessary for my sanity, imagine how much worse it is for those folks.

For IM's, at work, my status says something like: 'If you're physically in the building, come talk to me in person if urgent or send me an email if not. If you are remote, email me if not urgent, or let's set up a voice chat if urgent.'

The only useful thing about IM at the work place is in things like active debugging, or a coworker in an adjacent cubicle needing to send me a link related to a live conversation we're having. Other than those, it becomes a conversation that drags out and never ends. They'll often wait multiple minutes before responding to me (they are the ones who initiated the IM). Having IM windows open and randomly flashing takes up my mental space and distracts.

(Note: We do not use Slack).

(Note 2: My saying 'send email' contradicts with my first paragraph, which I wrote with my personal email in mind. The SNR is much higher for my work email).

senorjazz(10000) 3 days ago [-]

> - When you send an "instant message", it may be instant for you but for all you know the recipient is deep in the middle of something and won't respond for awhile. Not personal, don't get mad at them.

People don't get it. If they send a DM / IM through any program that's considered a 'chat' app, they expect instant replies.

I run a single person software company. I used to allow people to contact me for support via any and all chat programs. So for paying customers you can magnify that 'expect an instant reply' factor. People would get quite annoyed, regardless of whether it was Sunday, or 3am for me.

For that reason, only email and forum support is now offered. Never had a complaint about wait times since. People expect to wait after sending an email. They expect to wait for a forum post. They do not expect to wait for a chat / IM / DM response.

gppk(10000) 3 days ago [-]

I agree with a lot of these.

One thing I've noticed in life is that we increasingly use asynchronous communication methods in a 'respond now' fashion.

Also, as a contributor to StackOverflow, your point about asking questions with the right amount of information is really spot on.

sergiotapia(789) 3 days ago [-]

I own a 2.3K star Github repository (https://github.com/sergiotapia/magnetissimo) that, while trivial in its implementation, is pretty useful to a ton of people.

There are a few core contributors to the project now and a Discord server as well and everything in this article rings true to me.

It becomes an obligation of sorts and you definitely feel like you're letting people down, especially when it comes to people who have taken the time to contribute and share feedback, let alone people who actually open PRs.

What I think can help is having more core contributors with write permissions, share the load. Or be up front in your readme and say, 'I only check this once a quarter'.

simonebrunozzi(808) 3 days ago [-]

FYI, it seems that your https://sergio.dev/ is down.

stupidcar(4103) 3 days ago [-]

It's ironic that Git was intentionally designed to decentralise the process of software development, so that there was no single 'blessed' repository, but instead a network of peers, each of which could evolve at a different rate and in a different way, with the community deciding which would 'win' in terms of popularity.

And yet, via Github, the community has reinvented the centralised model of software development. For any project, there is a single instance that is the 'blessed' version, by virtue of having the most stars or collaborators, or owning the associated package name. 99% of forks are nothing more than a place where you create a feature branch that you intend to PR to the blessed project.

I hope that, one day, somebody invents a git-based project ecosystem that solves the problem of decentralised project management. I don't know exactly what this would look like, but I think it would need to embed the idea of community and contributor consensus and democracy at a fundamental level, separating the concept of the project and project ownership from any individual fork.

sudhirj(3638) 3 days ago [-]

Git was designed to decentralise work, not power - they're very different.

Think about the author of Git himself - Linus. He didn't write Git because he decided one day that he was tired of being the final decision maker for the Linux kernel, and that he now wanted a thousand decentralized kernels to bloom, each with their own management and maintainers and trust levels.

It was designed to give everyone a way to work with full featured version control on their own machines, and set up adhoc networks of version control between any consenting parties. But that did not imply that the power structures that make up an organization would automatically be demolished.
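The ad-hoc, server-less exchange described above can be sketched with plain git commands; a minimal example (repo names and paths are illustrative, not from the comment):

```shell
# Two "consenting" repositories exchange history directly,
# with no central server involved.
set -e
tmp=$(mktemp -d)

# Alice creates a repository and commits some work.
git init -q "$tmp/alice"
git -C "$tmp/alice" -c user.email=alice@example.com -c user.name=Alice \
    commit -q --allow-empty -m "initial work"

# Bob creates his own repository and adds Alice directly as a peer remote.
git init -q "$tmp/bob"
git -C "$tmp/bob" remote add alice "$tmp/alice"
git -C "$tmp/bob" fetch -q alice

# Bob now has Alice's history, fetched peer-to-peer.
git -C "$tmp/bob" log --oneline FETCH_HEAD
```

The same pattern works over SSH or HTTP URLs; GitHub is, in this view, just one very popular peer that everyone happens to add as a remote.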

Github is the extension of Git as it was designed. Maintainers on Github follow similar organizational policies as the Linux kernel and some are even codified into Github itself. But none of this is counter to the purpose of Git or even its ethos.

djangovm(3488) 3 days ago [-]

I am wondering if this is because of two factors:

1. Accountability: where do I go looking if something is broken?

2. Discoverability: where to search, and whom to believe as the correct owner of the copy of the codebase.

This is similar to blockchain currency (based on my limited understanding... correct me if I am wrong). While it is meant to be decentralised, entities like Coinbase effectively create a central entry/exit point somewhere in the ecosystem.

dooglius(10000) 3 days ago [-]

>Git was intentionally designed to decentralise the process of software development, so that there was no single 'blessed' repository

I don't think this is true. It was developed by Torvalds to maintain the Linux kernel, which has always had a 'blessed' repository at kernel.org.

albertzeyer(511) 3 days ago [-]

But it does help with decentralized development, does it not? And you have this single instance of authority in almost all cases anyway - e.g. for Linux itself you also have that, and Git was developed for Linux.

I think the issue is more about how to structure pull requests, issues, etc., in a better way (maybe hierarchically), such that it matches the exponentially growing amount of communication. GitHub does not help too much with this, but it also does not really fight against it. This is basically a problem every big project has to manage in some way. For comparison, I would look at other big projects like TensorFlow, Firefox, Chromium, Linux, CPython, etc.

AmericanChopper(10000) 3 days ago [-]

I think if your idea about decentralizing can accurately be described as fragmenting, then it's probably not a good thing. I don't know anybody that wants to have multiple separately maintained versions of the same project. When I use a piece of software, I want to have a good idea about how it will behave. I want to pick a project that I can be somewhat assured is fit for purpose, and that will provide stability. That's why people invest in maintaining and using the most widely used and adequately maintained projects. If I don't like the direction maintainers are taking a project in, then I can fork it. If enough people feel the same way as I do, then that fork might also become widely used and adequately maintained. There's plenty of examples of exactly that happening, and I can't see anything that's undemocratic about it.

Regarding having a centralised service like github, I can't see how that impacts the democraticness of anything at all. I can pull from it, I can push to it, I can fork on it. The centralizedness of it doesn't seem to have any impact on participation to me. It just seems to be a matter of convenience. The less places I have to look for the thing I want, the better my experience is.

nicpottier(2142) 3 days ago [-]

> Sometimes I believe that software, while great, will never be huge like writing a book that will survive for centuries. Not because it is not as great per-se, but because as a side effect it is also useful... and will be replaced when something more useful is around.

I once saw a sig somewhere that was something along the lines of the author's goal being to write some piece of software that would outlive them. That seemed like a neat, but incredibly ambitious goal; very, very few pieces of software will meet it.

That said, Redis is probably one of them. It isn't too hard to imagine that there will be Redis instances running somewhere in 40 years, and though I wish antirez a long life, I think it is likely there will still be Redis instances chugging along after he passes.

So be proud antirez! You've most certainly made a dent in the universe.

redisman(10000) 3 days ago [-]

Data migration is difficult. I'm sure there are a lot of very old databases still chugging along in some dusty office corner because no one wants to touch them since they've worked adequately for the last 30 years.

peterwwillis(2505) 3 days ago [-]

I think people are getting way too upset over GitHub features like issues & PRs, and the desire to please others. If you work on an open source project and you're not getting paid for it, you really need to divorce yourself from the demands of users. You have to assume that literally no one will use the software and just make it for yourself. If it becomes popular and you get feedback and contributions, great! Hopefully you can develop relationships that will lead to co-maintainers and such. But if it starts to feel like a struggle, just re-focus on what you want to get done, on your timeline.

thrower123(3377) 3 days ago [-]

Too many people aren't aware that you can say no, and not do something if you don't want to or if it is a bad idea. Especially if nobody is paying you.

Even for people that are paying you for a product, sometimes you really need to say no, that is how it works, and no, we are not going to customize it for your particular workflow.

mholt(1733) 3 days ago [-]

Man. I can relate. Open source projects are roller coasters. It's great when loads of people start using it... and terrible at the same time. It's validating to see people using your work, but sometimes you just want to submit your paper to NIPS rather than deal with what you think in the moment (and, frankly, in hindsight too) is a dumb issue: https://github.com/mholt/caddy/issues/1680#issuecomment-3027...

I wrote up my experience from one wave of negativity a couple years ago: https://caddy.community/t/the-realities-of-being-a-foss-main...

It still haunts me to this day, but my attitudes are finally trending more positive about the whole thing.

kbenson(3329) 3 days ago [-]

One of the major benefits of Open Source is that the user doesn't need to get a hold of the developer. It's always possible (or should be, if they're leveraging open source well) to patch a problem locally and build a fixed version themselves. If they don't have the ability to do so, they really should at least have someone they can contact and pay (probably an exorbitant amount if it's an emergency) to do that for them. It's the open source equivalent of an application support contract or warranty.

nerdbaggy(4131) 3 days ago [-]

I feel for you guys. People love to get upset when open source projects try to make money.

tlrobinson(360) 3 days ago [-]

> Sometimes I believe that software, while great, will never be huge like writing a book that will survive for centuries. Not because it is not as great per-se, but because as a side effect it is also useful... and will be replaced when something more useful is around.

That's an interesting point and thought experiment: what piece of software currently exists that will still be widely used in a century? And what's the oldest software that's still being used currently?

trystero(10000) 3 days ago [-]


peterwwillis(2505) 3 days ago [-]

The oldest software still used is operated by banks, government agencies, schools and businesses. A large number of them are black boxes, and very expensive, so nobody will replace them. Their hardware just gets maintained ad infinitum, and the software doesn't care how old it is, so it just keeps running.

So basically, the age of software is limited by its hardware. (It's similar with animals: if nobody bumps them off and the hardware keeps ticking, the software probably will too... fish and mammals can live for hundreds of years)

BillGates432(10000) 3 days ago [-]

Spider Solitaire

sanxiyn(3195) 3 days ago [-]

> And what's the oldest software that's still being used currently?

I am very interested in this question, but much of software is proprietary and you don't have much visibility into their history.

For software in the public, a trivial lower bound is GCC 1.0 in 1987. Another case I know of is the Community Climate System Model, in continued development since 1983.

eeZah7Ux(3129) 3 days ago [-]

This is way more than a thought experiment. Stable, mature software runs the world.

E.g. the CIP project aims to maintain released kernels for 25 years. https://www.cip-project.org/

jcelerier(3913) 3 days ago [-]

`ed`, of course

thrower123(3377) 3 days ago [-]

I would place money on people still using Vim and Emacs in a hundred years.

jonathanberger(3662) 3 days ago [-]

Unix. Email. TCP/IP. Just taking an initial stab.

smitty1e(10000) 3 days ago [-]

1. I've never upped my game enough to contribute to a project, much less put something on github. So, to the vast swath of people who are cooler than I: thank you.

2. TFA really gets at the classic issue of working vs. managing. Technical people tend toward being better at the former. Context switching between doing the work and managing the work is hard. Punting the management and diving into the work almost seems an escape.

3. The code itself makes this point to us. We can define an integer, and there it is. But when we need a list of integers? Look at how the management required just exploded.

4. Management sucks, but a management vacuum is a void, indeed. Let us be thankful for managers who don't suck.

DannyBee(3186) 3 days ago [-]

(I've been in the situation he describes, unfortunately)

I agree that a lot of that article feels like two things (but I'm not going to remove all nuance - there are clearly other issues):

1. The issue of being forced to be a manager when you want to be an individual contributor, and even worse, feels like being forced to be a TLM when you want to be an individual contributor.

2. Even if you take away the aspects of #1, there seems to be a missing recognition that individuals don't build software past a certain point. I mean that in the sense that, IMHO, past a certain point, all software becomes team built (or fails), whether you are the manager or not. High level software engineering is a team sport, not an individual one. That is true even outside of management - it's also about technically mentoring and growing the team that builds your software so you can rely on them instead of you (which is not just a manager task).

Every person you mentor into someone capable of doing the kind of work you want done is going to increase productivity on the software a lot more than you trying to do it yourself. Over time, they will also be able to build your team.

Over time and at scale, if you don't have a team to rely on, and don't actually start relying on them, you will just feel crushed. Utterly, utterly crushed.

There is no path out of it that involves getting better yourself. You simply cannot scale past a certain point and there is no way out of it.

aeorgnoieang(4074) 3 days ago [-]

'Just' read the source of something you like:


Or not – you're not obligated to contribute!

scarejunba(10000) 3 days ago [-]

> I've never upped my game enough to contribute to a project, much less put something on github.

Well, if you want to, just find a project you like and search their bug tracker. Usually there's a lot of shit that's trivial which no one could find time for.

You know, the stuff that people will be like 'OpenOffice's About Page hasn't been disableable for over fifteen years. This bug report is untouched!' Or whatever. You can fix it in five and be happy.

dbnotabb(3939) 3 days ago [-]

We're working to address some of the issues raised in this piece, by creating a new way of working on Open Source.

Companies post their issues on our platform and contributors are incentivised by getting to know a company, a monetary reward and the potential of raising their profile with the company to get hired.

We're looking for feedback so if anyone has any thoughts - https://works-hub.com/issues

Crinus(10000) 3 days ago [-]

Others have tried what you seem to be trying and it didn't work. How are you going to convince people who chose a technology because it is free to pay for it - and to pay more than insultingly low scraps? And once someone does accept and the work is done, what happens if they are not the main developer of the software that was worked on and the developers decide they do not want to accept the work? Where does that leave your customers, who most likely (especially if their initial major motivation was free stuff) won't want to pay someone to fix/update the patches every time a new version is released, nor be willing to maintain their own fork?

I might be missing something of course since your site doesn't work. Every time i tried to click on a link nothing happened. Firefox's dev console had this warning:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://api.segment.io/v1/p.

Looks like your tracking is broken.

BTW, I'm kinda curious: how come you have '25,000+ software engineers on WorksHub' but only 8 issues, all of them from your own company?

Historical Discussions: Netflix Saves Kids from Up to 400 Hours of Commercials a Year (May 14, 2019: 649 points)

(649) Netflix Saves Kids from Up to 400 Hours of Commercials a Year

649 points 6 days ago by sharkweek in 894th position

localbabysitter.com | Estimated reading time – 3 minutes | comments | anchor

Even though kids these days are spending more and more time staring into their phones, television screen time is still a dominant source of entertainment for kids 2-11 years old.

But the way most children are consuming television these days has shifted. Many homes are now "Netflix-only" homes, while others remain more traditional with standard cable packages.

We calculated a series of numbers related to standard television homes, compared them to Netflix-only homes and found an interesting trend with regard to how many commercials a streaming-only household can save their children from having to watch:

  • Children 2-5 years old in Netflix-only homes are being saved from over 400 hours of commercials a year.
  • Children 6-11 years old in Netflix-only homes are being saved from over 360 hours of commercials a year.

A study conducted by the University of Michigan found that kids 2-5 spend an average of 32 hours per week in front of a television set. Children 6-11 spend 28 hours in front of a television set (there are some who believe the decrease between brackets is likely due to the slightly older kids now transitioning to phone time).

This means:

  • The average 2-5 year old is spending over 1,600 hours a year watching television.
  • The average 6-11 year old is spending over 1,450 hours a year watching television.

Ratings tracking company Nielsen reported in 2014 that the average hour of television has close to 15 minutes and 30 seconds of commercials. If calculated out over the above annual totals, both age brackets are saved from hundreds of hours of commercials every year.
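The article's arithmetic can be checked directly; a quick sketch using the University of Michigan viewing figures and the Nielsen ad rate quoted above:

```python
# Re-derive the article's "hours of commercials saved" numbers.
HOURS_PER_WEEK = {"2-5": 32, "6-11": 28}  # weekly TV hours, by age bracket
AD_MINUTES_PER_HOUR = 15.5                # Nielsen, 2014

for bracket, weekly in HOURS_PER_WEEK.items():
    yearly_tv = weekly * 52                             # TV hours per year
    yearly_ads = yearly_tv * AD_MINUTES_PER_HOUR / 60   # ad hours per year
    print(f"{bracket}: {yearly_tv} h TV/year, {yearly_ads:.0f} h of ads")
# → 2-5: 1664 h TV/year, 430 h of ads
# → 6-11: 1456 h TV/year, 376 h of ads
```

Which matches the article's "over 1,600 / 1,450 hours of TV" and "over 400 / 360 hours of commercials" claims.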

Streaming services like Netflix, Hulu, and Amazon Prime, among others, have children's entertainment options that offer commercial-free consumption of both educational and entertaining television. Think of all the hours of wacky toys and sugary cereal young children are being spared.

Now of course it's worth discussing YouTube.

It is becoming fairly normal for kids as early as their toddler years to be watching YouTube for several hours a day. And while parents can get an ad-free experience through services like YouTube Red, most don't (and even with YouTube Red, kids can still watch content from advertisers like toy company YouTube channels or unboxing videos, which are essentially toy ads as well).

While we couldn't find any specific data on whether or not a child in a home with standard television versus Netflix-only options watches more YouTube, we're willing to bet the numbers are probably fairly equal. Kids love YouTube, and advertisers are throwing billions of dollars every year at the service to get their products in front of young audiences.

We're also curious, as children of the 90s ourselves, does a child who grew up without a continual bombardment of ads end up more susceptible to advertising later in life? Did those early days of being shown ad after ad for LIFE Cereal end up turning us all into advertising cynics? No data yet on this, but something we'll definitely be paying close attention to in the future.

All Comments: [-] | anchor

fuzzbuzz(10000) 5 days ago [-]

Take this with a pinch of salt. Barbie™, Lego™, Paw Patrol™, Shop Kids™, and whatnot have their own shows. Instead of watching a series with commercial breaks, the kids watch 30-minute commercials with some story baked into them.

And then there is product placement in everything else.

Don't know if it's better. It is definitely more 'sneaky'.

jdashg(10000) 5 days ago [-]

This has been the case for decades. These days Transformers just has more lens flare.

luckydata(4121) 6 days ago [-]

It's true, but I don't like how I can't filter content for my son. He's 4yo and is a little prone to act in real life like the characters in the cartoons do, and I would like to at least temporarily filter things that have fighting in them (like Power Rangers) because he tends to fight his classmates (they like it, but the teachers give us endless grief about it).

No way to do that and it seems like a pretty basic form of control that should be table stakes for a kids oriented service.

silversconfused(10000) 6 days ago [-]

We set up a PC with two mirrored video outputs for netflix. One goes to the TV in the living room, and another is VGA over CAT5 to a smaller monitor in the kitchen. Makes monitoring consumption easy.

dqpb(3960) 6 days ago [-]

Keep him away from YouTube then. They have an outbreak of 'kids shows' designed to corrupt and traumatize the minds of children.

nhumrich(3992) 6 days ago [-]

Yes! This all day long. I want either a whitelist or blacklist for shows on Netflix children accounts. Amazon Prime kidtime lets you do this for their tablets (even though the feature is very hidden), and it's awesome.

duxup(3882) 6 days ago [-]

Even if just for my own sanity ... I like the lack of commercials and the resulting stable levels of volume.

My kids don't watch a lot of TV / screen time but when they do I want to be able to monitor it, and yet also maintain my own sanity.

moate(4105) 6 days ago [-]

Oh my god, this! Why is it so hard to balance the audio between commercials and shows?

IloveHN84(10000) 5 days ago [-]

For all the people living frugally here: You can also use alternative YouTube clients (e.g. NewPipe) to skip all the commercials while saving money and doing it legally.

Nice #ad for Netflix. I would never spend a cent on it knowing that they do geoblocking and limit shows to specific countries because of agreements with other platforms, such as Sky. Instead, I would go for some third-party Kodi plugin and skip the need for 2-3 accounts to watch all the shows I want, freely.

Y_Y(3749) 5 days ago [-]

I've found SkyTube and MusicPiped (a cousin of NewPipe, just for music) to both be excellent YouTube clients, and both are naturally available on F-Droid.

uBlock Origin with all the extra filters makes youtube.com tolerable on desktop.

SpaceInvader(1769) 6 days ago [-]

A year and a half ago[0] it was 230 hours. Are there more commercials on TV nowadays?

[0] https://news.ycombinator.com/item?id=15990559

jerf(3287) 6 days ago [-]

My wife and I buy a couple of relatively popular TV series on Amazon to watch them without commercials. I've been watching the minute count on a 'half-hour' TV series creep down over the years... 23 minutes... 22 minutes... current 'best' record I've personally witnessed is 18 minutes, but stay tuned. If someone replied with a 16 or a 17 example I wouldn't be shocked. I'm wondering if the TV companies will experience some modicum of shame around the 15 minute mark.

davidy123(4129) 6 days ago [-]

This is a step in the right direction. Still, Netflix &c force a lot of content of their own choosing through suggestions and highlighted content, using ever more invasive techniques, like autoplay, to push that content on users. A service that only ever showed content that someone specifically requested would be an improvement.

jeffrallen(10000) 5 days ago [-]

To turn off auto-play, log in on a computer, not Apple TV. Go to Account, Profile, Playback settings. For kids accounts it is different: get into the help pages, and from there the kids profile's playback options are available. Very confusing; I only found it by chance while chatting with the support people.

nickreese(3819) 6 days ago [-]

What a terribly thin affiliate site? How'd this get front-paged? Why aren't we linking to the study instead of the summary of the study?


dang(172) 6 days ago [-]

The article is a thin layer, I agree, but it adds the bit that people are mostly responding to.

makecheck(3890) 6 days ago [-]

People applaud the lack of commercials because of the advertising industry's self-inflicted wound of making obnoxious ads.

Ads didn't have to try so hard to be WAY LOUDER THAN THE SHOW. They didn't have to be trying to sell obviously-terrible things ("Get StupidDrugName! <happy music> Side effects may include: death"). They didn't have to subject you to 12 identical copies of the same commercial in one broadcast. They didn't have to greedily demand an increasing share, to the point of significantly reducing the length of a show. And that's just TV...web ads have lots of avoidable crap too.

They got greedy, they made ads suck, now people have no use for them.

r00fus(4131) 5 days ago [-]

> They got greedy, they made ads suck, now people have no use for them.

Many figured this was the logical conclusion, absent regulation.

Of course, the industry fought the concept of regulation or even self-regulation tooth and nail.

So here we are.

DanFeldman(4130) 6 days ago [-]

Good riddance. The last time I watched TV with ads was for the superbowl, and even those nicely produced ads I found absolutely grating. There's no way I can go back to ads between tense moments in TV shows. It's like seeing the internet with an adblocker for the first time.

mrec(10000) 6 days ago [-]

Similar thing here - the only time I now encounter video-ads-making-noise is at the cinema before a film, and it's becoming unpleasant enough there that I've largely stopped going to the cinema.

It's not (or not just) that they're getting worse, it's that once the familiarity wears off you can't help but see how psychologically damaging and hostile they are.

blobbers(10000) 6 days ago [-]

The average 2-5 year old is spending over 1,600 hours a year watching television.

This seems nuts to me. That is 4.5 hours a day! For a 2 year old? Who are these parents...

moate(4105) 6 days ago [-]

Average parents?

safgasCVS(10000) 6 days ago [-]

Hold up. Are we saying the average kid is watching over an hour of commercials per day? If that stat is true (which would mean around 4 hours of TV per day, give or take), the solution is not Netflix but pumping the brakes on TV. That's an insane amount of TV.

asdff(10000) 5 days ago [-]

TV has been overconsumed since the 80s at least. When my parents grew up they only had a dozen or so stations, so if their show wasn't airing there wasn't a point to even turn on the TV.

blobbers(10000) 6 days ago [-]

'The average 2-5 year old is spending over 1,600 hours a year watching television.'

This seems nuts to me. That is 4.5 hours a day! For a 2 year old? Who are these parents...

aczerepinski(10000) 6 days ago [-]

Childcare is super expensive. I think there are a ton of people who can't afford it and use screen time to supplement. It keeps the kid safe in one spot while you can get stuff done. Keep in mind that letting a child play outside by themselves is essentially illegal.

colordrops(10000) 6 days ago [-]

Yeah that was my first thought. Maybe kids are being saved from commercials but not from the screen.

vkou(10000) 6 days ago [-]

> Who are these parents

They are two people, each of whom works at a job (which is necessary for most families), who don't live with their retired parents (who would watch, and play with, grandkids for free), and can't afford a nanny. (Or, as rich engineers call them, an au pair.)

In short - normal 21st century people.

(And if they are single parents, this equation becomes even more screwed up. The problem with being a single parent, is that you have to live with whatever life decisions lead you up to that point, for the next 18 years.)

chapium(10000) 6 days ago [-]

Yeah, I'm going to call BS on this one. Perhaps they are including kids who have to have an iPad in front of them at every meal and every walk in their stroller?

throwaway55554(10000) 6 days ago [-]

I wanna know what 2 year old can stare at a screen for 4.5 hours. All the 2's we've had have been all over the place and won't sit in front of a tv.

Not that I'm complaining... Honest, I'm not.

> Who are these parents...

Remember 'It takes a village'? Well, in all honesty, it does. And if you don't have a support system in place, you'll have a hard go of it.

makk(10000) 6 days ago [-]

Or, turning off the screens saves kids from 100% of commercials a year. Something about articles like this makes me want to vomit.

kikoreis(10000) 6 days ago [-]

You beat me to it. A lot of the comments here underscore how much better it is (which I don't doubt) without seeming to acknowledge that ads are very likely to come -- in fact, didn't they test already last year? Yeah, https://thenextweb.com/contributors/2018/10/20/netflix-tests...

blobbers(10000) 6 days ago [-]

Great root cause analysis. Wish more people would think like this.

QuackingJimbo(10000) 6 days ago [-]

400 hours of commercial-watching is not being 'saved'

If Netflix and other streaming services didn't exist, kids would not be watching 1600 hours of terrestrial TV with ads

jandrese(4130) 6 days ago [-]

Citation needed.

There are plenty of studies showing how kids watch way too much TV or equivalents.


kailuowang(3796) 6 days ago [-]

400 hrs of commercials a year? How much TV are these kids watching a day?

michaelt(3955) 6 days ago [-]

According to [1] the average US adult watches 4 hours 10 minutes of live TV per day. Personally that comes as a surprise to me - but I would expect Nielsen to know a thing or two about surveying TV viewership.

[1] https://www.nielsen.com/us/en/insights/news/2018/time-flies-...

mobjack(10000) 5 days ago [-]

They probably aren't literally watching the TV the whole time.

Some parents have the TV on in the background but the children are distracted with other tasks.

For example, I will watch the news while my daughter plays in the same room doing her own thing.

gwtaylor(10000) 6 days ago [-]

The article claims nearly 4.5 hours per day. I find that hard to believe... I don't know any children (in my admittedly biased sample) that watch more than an hour or two.

meddlepal(10000) 6 days ago [-]

Sub-headline: Kids waste those 400 hours playing Fortnite.

kxter(4126) 5 days ago [-]

or Kingdom Hearts, or Mortal Kombat... ;)

legitster(4126) 6 days ago [-]

In a related note, I have come to realize the different levels of quality in children's television:

- Public television shows (PBS) stuff is so lovingly created and charming. You can really tell they had experts and creatives work side by side to craft positive stories. Even if it's not particularly exciting.

- Super-smart and creative shows (like the first couple seasons of Spongebob, or Gravity Falls). Great shows on their own merit, super funny and creative. But they are an art first and not necessarily focused on development. Good at clueing kids in on structuring jokes or references, not much else.

- Bland and harmless. Shows that drive a story or characters, but not necessarily lovingly made or particularly funny. I have found most of the Netflix/Amazon/Hulu stuff falls in here.

- Colorful garbage. It's a lights and noise show, with huge focus on licensing. Cartoons of my era are particularly susceptible to this (Dragonball Z, Yu Gi Oh, Transformers, etc).

chrisweekly(3902) 6 days ago [-]

Gravity Falls -- yeah! I can't stand Spongebob, but are there other shows like GF?

mehrdadn(3525) 6 days ago [-]

> - Public television shows (PBS) stuff is so lovingly created and charming. You can really tell they had experts and creatives work side by side to craft positive stories. Even if it's not particularly exciting.

> - Colorful garbage. It's a lights and noise show, with huge focus on licensing. Cartoons of my era are particularly susceptible to this (Dragonball Z, Yu Gi Oh, Transformers, etc).

What PBS shows are you imagining? Clifford? Sesame Street? I never thought they were targeting the same demographics as the 'colorful garbage' like Yu-Gi-Oh (early elementary vs. late elementary/middle school). Also a lot of other popular shows (like Spongebob) seemed more like colorful garbage to me than things like Yu-Gi-Oh that you lumped in there.

And what about classics like Tom & Jerry, Roadrunner, etc.? There wasn't much particularly educational or colorful or really anything about them but man now I'm tempted to go watch them again.

zanny(10000) 5 days ago [-]

YuGiOh and Transformers were largely toy commercials, but you lump Dragonball Z into the same class? It practically indoctrinated half a generation into fitness and physical self-improvement; I still sometimes hear Bruce Faulconer's score at the gym.

I feel there is a substantial divide between a narrative adventure story by a Japanese manga artist crafted as a parody retelling of Journey to the West and followed by his eccentricities (Frieza was an alien and power ranger parody, Cell was a bishonen parody, there was tons of messaging in the core narrative about passing the torch, etc) and a product like Transformers designed by a corporate committee of Hasbro from day one meant to sell toys. Sure, the further into Dragonball you got the more commoditized and derivative it became, but at least the original Dragonball show is a worthy classic.

There absolutely is a class below all those shows, where there is no narrative development and every episode is designed and manufactured for self-contained entertainment with no greater depth. A lot of Hanna-Barbera cartoons fall into that class, Scooby Doo absolutely - where every episode is designed to be disposable and ultimately meaningless, just meant to distract.

throwaway66666(10000) 6 days ago [-]

Netflix puts the ads directly inside the shows they control themselves. This way no adblock can censor them, and Netflix can prance around pretending they are saints for not showing ads.

Example 'Hey let's call an uber' says character A.

'I watched X on netflix yesterday' says character B.

tyingq(4088) 6 days ago [-]

Good point, but subtle product placement is a TON better than explicitly pushing sugary cereal as a 'part of a wholesome breakfast' for 2 minutes straight.

bargl(4074) 6 days ago [-]

This is ALL over so many shows on cable TV. It hit me one day while watching Bones: two characters had a conversation about the features of their 'awesome' car.

I also recently saw this on 'A Million Little Things': there was an absolutely useless shot of a main character opening the trunk of the car with a foot wave. It was so overdone and out of place.

ihuman(2718) 6 days ago [-]

'Is that a PS Vita? I have a console at home; I play sometimes to relax. I oughta get one of these for the car' - House of Cards

hef19898(3225) 6 days ago [-]

Walulis, a German YouTube channel (imagine a German John Oliver for social media, with a pretty solid Hitler-joke-per-video ratio and high-quality content) had a video about this. Apparently Bill Gates is a majority stakeholder in one of the companies making billions matching brands with productions, sometimes years before the production takes place. Another example is Stranger Things and some cereal brand.

mbrameld(10000) 6 days ago [-]

Isn't that just realistic, though? That's how dialogue in real life sounds, too. I and most people I know use 'uber' as a generic term for a ride share service. We also distinguish which streaming service the show we're talking about is on if it's not already known by everybody in the conversation.

Synaesthesia(4037) 6 days ago [-]

There's also just the ideological content of the shows.

Symbiote(4106) 6 days ago [-]

It's called product placement [1].

There are rules restricting it in the UK, which must complicate broadcasting some American TV shows.

There are also companies using software to replace the branded product according to different markets — so they can rebrand that cereal box for the rerun if necessary.

[1] https://en.wikipedia.org/wiki/Product_placement

scarface74(3811) 5 days ago [-]

I would much rather a TV show say "Let's call an Uber" than "Let's call a peer to peer ride sharing service".

Just like everyone says "I am going to Google that" not "I'm going to search for that". One sounds stilted. The other sounds natural.

robbiemitchell(3363) 6 days ago [-]

I'm guessing you don't have kids and haven't actually experienced the difference.

- Netflix has gobs of kids shows (i.e., cartoons) that have zero product placement, at least as far as I can tell. This includes both Netflix originals and the rest.

- TV has 6-minute ad breaks every few minutes that are non-stop pitches for toys and other shows. It's unbearable.

Even shows on premium networks with some product placement and mentions are 1000x better than regular cable. There's simply no comparison.

aqme28(4018) 6 days ago [-]

That's product placement and it's everywhere. Netflix might still be saving kids from 400 hours of commercials even if they have some commercials themselves.

pavel_lishin(221) 6 days ago [-]

You choose the shows your kids watch; you don't get to choose what commercials a cable/airwave channel shows you.

willio58(10000) 6 days ago [-]

Regular tv does this equally if not more.

danielfoster(3903) 6 days ago [-]

This is great but if your child is watching so much Netflix that 400 hours worth of commercials are avoided, maybe your child is watching too much? Go play with Legos or do something interactive...

nickelcitymario(10000) 6 days ago [-]

Well, are we assuming they're spending those 4 hours watching TV passively? Maybe they are. Or maybe they just like to have it on while they do other things.

I know my own kids tend to have Netflix on all the time, but are also playing games, making art, talking to each other, doing chores, etc.

I myself have it on for many hours a night, but I spend almost no time sitting in front of the screen. (Sundays have been an exception for GoT.) It just makes doing dishes, cleaning, and laundry so much more bearable.

That's what these studies always seem to fail to address. The assumption is if the TV is on, you must be staring at it. And that's simply not the case.

crazygringo(3750) 6 days ago [-]

Given how educational TV can be in teaching language skills, interpersonal skills, and basic facts, I don't think this is necessarily too much at all, if it's 2-3 hrs/day.

When I was a young child before I could read books, I learned tons about the world from watching Sesame Street, Mr. Rogers, and similar high-quality shows. I still spent plenty of time with my Legos, but my childhood would definitely have been worse without the programming on PBS.

diminoten(10000) 5 days ago [-]

I dunno, I'm not convinced TV time is as detrimental to children as we're being told. It sounds a lot like the 'kids these days' trope.

dpark(10000) 6 days ago [-]

> The average 2-5 year old is spending over 1,600 hours a year watching television.

> The average 6-11 year old is spending over 1,450 hours a year watching television.

Jesus. 4 hours of TV every day. That's nuts.

SmellyGeekBoy(4125) 5 days ago [-]

> Go play with Legos or do something interactive...

Maybe they're watching Bandersnatch? ;)

burntoutcase(10000) 6 days ago [-]

While I'm glad that kids in Netflix-only households aren't getting bombarded with advertisements targeted at children the way kids my age were in the 1980s, I can't help but suspect that these ad-free kids need to be exposed to some advertising in an educational context so that they learn to recognize when somebody is trying to con them into buying shit they don't need and probably didn't want in the first place.

balls187(4010) 6 days ago [-]

I grew up on a military base in a foreign country, and all the television was from the Armed Forces network. There were no commercials, just random PSA.

Occasionally teachers would get VHS tapes with recorded TV from the US, and we were more enthralled with the commercials than the programs themselves.

newsoul2019(10000) 6 days ago [-]

Indeed. Whenever a commercial is on, I like to ask the kids 'What is this ad trying to sell you?'

nordsieck(10000) 6 days ago [-]

> I can't help but suspect that these ad-free kids need to be exposed to some advertising in an educational context so that they learn to recognize when somebody is trying to con them into buying shit they don't need and probably didn't want in the first place.

Do you mean something like, the Internet?

Advertising is so pervasive, you'd probably have to do something drastic like Amish style technology banning to avoid it; even that may not be enough.

As a side note, even though Netflix doesn't have separate advertising, I'm sure there's at least product placement and other forms of advertising in their content.

michaelt(3955) 6 days ago [-]

Then you'll be pleased to learn adverts are also available online :)

bluetidepro(3324) 6 days ago [-]

> I can't help but suspect that these ad-free kids need to be exposed to some advertising in an educational context so that they learn to recognize when somebody is trying to con them into buying shit they don't need and probably didn't want in the first place.

It's not like they spend 24/7 on Netflix; they are probably getting bombarded with ads more than ever in history out in the real world. And probably more aggressively than by the commercials they're missing by using Netflix.

bzbarsky(1769) 5 days ago [-]

For what it's worth, I distinctly remember that sometime in middle school we had a unit where exactly that happened: we were shown some ads, and there was various discussion about the way ads are created, the tricks they use, etc, etc. The 'elmer's glue for milk in cereal ads' bit definitely stuck with me. ;)

awinder(4131) 6 days ago [-]

They learn / don't learn that skill from kidfluencers on YouTube. From what I've seen with my daughter that's a super-weaponized form of advertising compared to what I grew up in, anyone who survives Ryan and turns out halfway adjusted will have made it through a real crucible.

not_a_cop75(10000) 6 days ago [-]

Honestly, if you haven't seen the advertising now rampant in the new Netflix interface, then you haven't been paying attention. Every day I'm bombarded with 'ads' on Netflix for shows that my family wants no part of. Things that are offensive at worst, or at best misguided as 'positive' offerings, are frequently shown.

bargl(4074) 6 days ago [-]

Worth. Every. Penny.

I went to my parents house with my son. He was 4. They turned on cartoons. Then he said, 'Dad why'd you change the show? I don't want to watch this.' It was a commercial. I had to explain commercials to my son. It was at that moment I realized how much of my tv watching as a kid was commercials.

mcphage(4130) 6 days ago [-]

I had that same experience with my daughter—it was Thanksgiving, so we put on the Macy's Thanksgiving Day Parade 'cause hey, giant balloons. And then a commercial came on and she asked me to put the balloons back.

sitkack(4056) 5 days ago [-]

I came to say something similar. My child gets irate when she sees an ad, especially when she plays a free-to-play game on the iPhone and I forget to put the phone in airplane mode.

her: 'Dad, what is this? I want to play my game'

me: 'Oh sorry, that is an ad, let me fix this.'

her: 'I hate ads!'

She is basically the same with trailers.

Angostura(3569) 5 days ago [-]

I remember my kids' similar confusion/annoyance when they first strayed from the BBC children's channels - CBeebies and CBBC

mysterydip(10000) 6 days ago [-]

Same thing with mine. Not being able to choose an episode or show, and being stuck with whatever the channel wanted to provide, was a foreign concept.

anoncow(4113) 6 days ago [-]

After subscribing to YouTube Premium, I had a similar revelation. Now if only we could have a Google ad-free subscription which would turn off all Google search and display network ads, our lives would be so much better! What if cities outlawed advertising on billboards and instead collected a small tax to make up for the lost revenue? How much would our quality of life increase? That makes me wonder: if we could ever get to such a utopia, what avenues would advertisers have left to sell products to us?

83457(4079) 6 days ago [-]

Did you tell this story years ago on hn? Someone did and I've mentioned it to a few people since. (although I guess it is a common situation)

Fnoord(3874) 5 days ago [-]

This is one of the reasons I got Netflix, and one of the reasons I want to get rid of cable TV (I can't get internet without cable TV). However, I am getting tired of the autoplay functions in Netflix. They. Are. Annoying.

Commercials for children are a cancer to society. They're a distraction and a waste of our precious time. Parents don't want it (they got enough on their hands as it is), children don't want it (it isn't content, they get manipulated).

While I will protect my children (and myself) from any commercials, it's also good to teach them that not everything they notice is true or can be achieved/bought.

legohead(4122) 6 days ago [-]

Brand marketing towards children is a real and effective thing. There was a study done on wrapping basic food (carrots/apples) in McDonalds branding and kids said it tasted better [1]. Always seemed scary to me.

Glad my kids don't have the same amount of brainwashing as I did. However, we have our own generational problems to deal with, like YouTube 'merch' begging, clickbaiting, etc. Now that I think about it, maybe I'd rather they'd see TV adverts...

[1] https://www.cbc.ca/news/kids-think-food-in-mcdonald-s-wrappe...

m463(10000) 5 days ago [-]

I think a lot of 'shared experience' stuff in the network tv era included the ads. I still remember advertising songs and things like the big mac ingredients.

not_a_cop75(10000) 5 days ago [-]

Until it's not and they advertise anyway. Netflix has adopted the pay and advertise anyway model. The advertising in this case is the product you've already paid for, which is still annoying and bothersome. Apparently, they are trying to work off the success of AOL.

Circuits(10000) 6 days ago [-]

Funny that they say it's saving kids all this time.. as if adults aren't the main consumers of Netflix media.

amelius(883) 6 days ago [-]

On the flip side, commercials can teach a kid to be patient, and that they can't always have instant/continuous gratification.

sdegutis(3362) 6 days ago [-]

Why not go further and get rid of TV? The problems it creates seem to greatly outweigh any little benefits. And most of those benefits can be had from books anyway, which bring additional benefits of their own.

lmkg(4087) 6 days ago [-]

To be fair, my favorite show growing up was Transformers. So technically even the show itself was a commercial.

turk73(10000) 6 days ago [-]

I have had the same experience.

Also, some relative gave us some kids' VHS tapes and I still had a player so I figured why not? Well, let me tell you why not: Nobody has time for rewinding to take place and the low-quality video and audio is quaint, but it's dead tech. The VCR and all the tapes got recycled shortly thereafter.

anpmat(10000) 5 days ago [-]

Couldn't agree more, and when a lot of kids are doing that you can imagine how much of an effect it has on peer pressure and what is considered a cool toy to have.

mcv(4122) 5 days ago [-]

When I was a kid, our evening TV was always Sesame Street -> Klokhuis (a pop-sci-ish program for kids) -> Jeugdjournaal (news presented in an accessible way for kids), and after that it was my dad's turn to watch the real news at 8.

All those shows still exist, and recently we did that again, and we were all baffled by how many commercials we had to watch. Dutch public TV has commercials, and that looks increasingly odd now that we're all so used to Netflix. My son kept asking us to skip them, but we couldn't.

brixon(10000) 6 days ago [-]

Granted, my son now wants toys based on this 'new' show he was watching and is sad to find out that toys for this are no longer made and the only ones that exist are collectibles.

Raphmedia(10000) 6 days ago [-]

In Québec, the Consumer Protection Act prohibits commercial advertising directed at children under 13 years of age. One negative effect is that this has made it very hard for genuine educational companies to grow. It is hard to market a product that teaches kids to code if you cannot show the product to kids.

saluki(4130) 6 days ago [-]

Same here: my son grew up on Netflix. We were watching the Super Bowl on OTA TV when he was 5 and he asked the same thing, what's this? Oh, that's a commercial.

He also picked up that when we searched for a toy on Amazon, afterward there would be an ad on the side of YouTube for the same toy; he pointed and asked how they did that. This was around the same time, when he was 5 or 6.

pastor_elm(10000) 6 days ago [-]

Were the ads really all that bad, though? I don't remember asking for much more than video games, board games, Hungry Hungry Hippos, McDonald's, Sunny D, etc.

Kids today get crazy ads on Instagram and are asking for Supreme shirts, Yeezus shoes, Kylie Jenner makeup kits.

Historical Discussions: Virtual DOM is pure overhead (2018) (May 18, 2019: 643 points)

(646) Virtual DOM is pure overhead (2018)

646 points 1 day ago by nailer in 502nd position

svelte.dev | Estimated reading time – 9 minutes | comments | anchor

Virtual DOM is pure overhead

Let's retire the 'virtual DOM is fast' myth once and for all

Rich Harris Thu Dec 27 2018

If you've used JavaScript frameworks in the last few years, you've probably heard the phrase 'the virtual DOM is fast', often said to mean that it's faster than the real DOM. It's a surprisingly resilient meme — for example people have asked how Svelte can be fast when it doesn't use a virtual DOM.

It's time to take a closer look.

What is the virtual DOM?

In many frameworks, you build an app by creating render() functions, like this simple React component:

function HelloMessage(props) {
  return (
    <div className='greeting'>
      Hello {props.name}
    </div>
  );
}

You can do the same thing without JSX...

function HelloMessage(props) {
  return React.createElement(
    'div',
    { className: 'greeting' },
    'Hello ',
    props.name
  );
}
...but the result is the same — an object representing how the page should now look. That object is the virtual DOM. Every time your app's state updates (for example when the name prop changes), you create a new one. The framework's job is to reconcile the new one against the old one, to figure out what changes are necessary and apply them to the real DOM.

How did the meme start?

Misunderstood claims about virtual DOM performance date back to the launch of React. In Rethinking Best Practices, a seminal 2013 talk by former React core team member Pete Hunt, we learned the following:

This is actually extremely fast, primarily because most DOM operations tend to be slow. There's been a lot of performance work on the DOM, but most DOM operations tend to drop frames.

Screenshot from Rethinking Best Practices at JSConfEU 2013

But hang on a minute! The virtual DOM operations are in addition to the eventual operations on the real DOM. The only way it could be faster is if we were comparing it to a less efficient framework (there were plenty to go around back in 2013!), or arguing against a straw man — that the alternative is to do something no-one actually does:

onEveryStateChange(() => {
  document.body.innerHTML = renderMyApp();
});

Pete clarifies soon after...

React is not magic. Just like you can drop into assembler with C and beat the C compiler, you can drop into raw DOM operations and DOM API calls and beat React if you wanted to. However, using C or Java or JavaScript is an order of magnitude performance improvement because you don't have to worry...about the specifics of the platform. With React you can build applications without even thinking about performance and the default state is fast.

...but that's not the part that stuck.

So... is the virtual DOM slow?

Not exactly. It's more like 'the virtual DOM is usually fast enough', but with certain caveats.

The original promise of React was that you could re-render your entire app on every single state change without worrying about performance. In practice, I don't think that's turned out to be accurate. If it was, there'd be no need for optimisations like shouldComponentUpdate (which is a way of telling React when it can safely skip a component).
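
The `shouldComponentUpdate` escape hatch can be sketched without pulling in React itself. A plain class stands in for `React.Component` here; the method name and signature match React's, everything else is illustrative:

```javascript
// Plain class standing in for a React.Component subclass (React not imported;
// shouldComponentUpdate's name and signature match React's, the rest is a sketch).
class HelloMessage {
  constructor(props) {
    this.props = props;
  }
  // React calls this before re-rendering; returning false tells it
  // it can safely skip this component's entire subtree.
  shouldComponentUpdate(nextProps) {
    return nextProps.name !== this.props.name;
  }
  render() {
    return `Hello ${this.props.name}`;
  }
}

const c = new HelloMessage({ name: 'world' });
console.log(c.shouldComponentUpdate({ name: 'world' }));     // false: skip
console.log(c.shouldComponentUpdate({ name: 'everybody' })); // true: re-render
```

The catch, as the article notes, is that this is a manual opt-out: the default remains "redo the work".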

Even with shouldComponentUpdate, updating your entire app's virtual DOM in one go is a lot of work. A while back, the React team introduced something called React Fiber which allows the update to be broken into smaller chunks. This means (among other things) that updates don't block the main thread for long periods of time, though it doesn't reduce the total amount of work or the time an update takes.

Where does the overhead come from?

Most obviously, diffing isn't free. You can't apply changes to the real DOM without first comparing the new virtual DOM with the previous snapshot. To take the earlier HelloMessage example, suppose the name prop changed from 'world' to 'everybody'.

  1. Both snapshots contain a single element. In both cases it's a <div>, which means we can keep the same DOM node
  2. We enumerate all the attributes on the old <div> and the new one to see if any need to be changed, added or removed. In both cases we have a single attribute — a className with a value of 'greeting'
  3. Descending into the element, we see that the text has changed, so we'll need to update the real DOM

Of these three steps, only the third has value in this case, since — as is the case in the vast majority of updates — the basic structure of the app is unchanged. It would be much more efficient if we could skip straight to step 3:

if (changed.name) {
  text.data = name;
}

(This is almost exactly the update code that Svelte generates. Unlike traditional UI frameworks, Svelte is a compiler that knows at build time how things could change in your app, rather than waiting to do the work at run time.)
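
For contrast, the three diff steps above can be sketched as a naive reconciler. This is my own illustration, not any framework's actual algorithm; it assumes vnodes shaped like `{ tag, attrs, children }` with strings as text nodes:

```javascript
// Naive diff over vnodes shaped like { tag, attrs, children }, where children
// are vnodes or strings. Illustrative only, not React's real reconciler.
function diff(oldNode, newNode, patches) {
  if (oldNode === undefined) {
    patches.push({ type: 'mount', node: newNode }); // new child appeared
    return patches;
  }
  if (typeof oldNode === 'string' || typeof newNode === 'string') {
    // Step 3: text changed, so record a real-DOM update.
    if (oldNode !== newNode) patches.push({ type: 'text', value: newNode });
    return patches;
  }
  if (oldNode.tag !== newNode.tag) {
    patches.push({ type: 'replace', node: newNode }); // can't reuse the DOM node
    return patches;
  }
  // Step 2: enumerate every attribute even when nothing changed.
  const keys = new Set([...Object.keys(oldNode.attrs), ...Object.keys(newNode.attrs)]);
  for (const key of keys) {
    if (oldNode.attrs[key] !== newNode.attrs[key]) {
      patches.push({ type: 'attr', key, value: newNode.attrs[key] });
    }
  }
  // Steps 1 and 3: descend into children.
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    diff(oldNode.children[i], newNode.children[i], patches);
  }
  return patches;
}

const before = { tag: 'div', attrs: { className: 'greeting' }, children: ['Hello world'] };
const after  = { tag: 'div', attrs: { className: 'greeting' }, children: ['Hello everybody'] };
console.log(diff(before, after, []));
```

Running it on the `name` change walks the whole tree and compares every attribute, yet only one patch survives: the text update of step 3.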

It's not just the diffing though

The diffing algorithms used by React and other virtual DOM frameworks are fast. Arguably, the greater overhead is in the components themselves. You wouldn't write code like this...

function StrawManComponent(props) {
  const value = expensivelyCalculateValue(props.foo);
  return (
    <p>the value is {value}</p>
  );
}

...because you'd be carelessly recalculating value on every update, regardless of whether props.foo had changed. But it's extremely common to do unnecessary computation and allocation in ways that seem much more benign:

function MoreRealisticComponent(props) {
  const [selected, setSelected] = useState(null);
  return (
    <div>
      <p>Selected {selected ? selected.name : 'nothing'}</p>
      <ul>
        {props.items.map(item =>
          <li><button onClick={() => setSelected(item)}>{item.name}</button></li>
        )}
      </ul>
    </div>
  );
}

Here, we're generating a new array of virtual <li> elements — each with their own inline event handler — on every state change, regardless of whether props.items has changed. Unless you're unhealthily obsessed with performance, you're not going to optimise that. There's no point. It's plenty fast enough. But you know what would be even faster? Not doing that.
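
A plain-JS stand-in for what "not doing that" can look like (my own names, not React's API): reuse the previously built array of elements whenever `props.items` is referentially unchanged:

```javascript
// Plain-JS stand-in for skipping work when the input hasn't changed.
// All names here are illustrative, not part of React's API.
let lastItems = null;
let lastRendered = null;
let buildCount = 0; // instrumentation so the reuse is observable

function renderList(items) {
  if (items === lastItems) return lastRendered; // same reference: reuse as-is
  buildCount += 1;
  lastItems = items;
  lastRendered = items.map(item => ({ tag: 'button', label: item.name }));
  return lastRendered;
}
```

Calling `renderList` twice with the same array builds the virtual elements once; only a genuinely new `items` array pays the mapping and allocation cost again.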

React Hooks doubles down on defaulting to doing unnecessary work, with predictable results.

The danger of defaulting to doing unnecessary work, even if that work is trivial, is that your app will eventually succumb to 'death by a thousand cuts' with no clear bottleneck to aim at once it's time to optimise.

Svelte is explicitly designed to prevent you from ending up in that situation.

Why do frameworks use the virtual DOM then?

It's important to understand that virtual DOM isn't a feature. It's a means to an end, the end being declarative, state-driven UI development. Virtual DOM is valuable because it allows you to build apps without thinking about state transitions, with performance that is generally good enough. That means less buggy code, and more time spent on creative tasks instead of tedious ones.

But it turns out that we can achieve a similar programming model without using virtual DOM — and that's where Svelte comes in.

All Comments: [-] | anchor

reilly3000(4111) 1 day ago [-]

Has anybody here migrated a UI from React to Svelte? How did it go?

ru999gol(10000) about 19 hours ago [-]

of course not, this is just opinionated rubbish, in the real world people use react dom/native

guscost(3684) 1 day ago [-]

From the conclusion:

> Virtual DOM is valuable because it allows you to build apps without thinking about state transitions, with performance that is generally good enough.

In other words, Virtual DOM is somewhat-valuable overhead. This is a cool alternative, seemingly sort of a compile-time version of Knockout. It's probably worth a try for writing an efficient client app, but I have a hunch that I'd miss the 'HTML-in-JS(X)' pattern if I went back to using 'JS(?)-in-HTML' instead. A VDOM runtime allows you to write plain JS that 'just works', at least until certain parts need to run faster. This means junior programmers can pick it up and become productive quickly, and avoid driving their projects off a metaphorical cliff.

Of course this is bought with bandwidth and CPU overhead, lots of it in some cases. The call you should make when considering a VDOM is whether the safety and familiarity benefits are worth the overhead. If your team is experienced enough to take on a new DSL for rendering markup (which every template-binding tool really is) and meticulous enough to assign instead of mutate and avoid two-way binding pitfalls, go for it. If not, be careful.

This is not meant as a challenge. Personally I wouldn't want to work on a big application that is wholesale optimized in this way, unless there was no alternative. I wouldn't write my own game engine (if it was for a job) either.

floatboth(10000) about 22 hours ago [-]

You can easily do HTML-in-JS 'react-like' rendering without diffing!

lit does just that https://lit-element.polymer-project.org

Lx1oG-AWb6h_ZG0(4081) 1 day ago [-]

Dan Abramov has a great thread about this here: https://mobile.twitter.com/dan_abramov/status/11209717954258.... In particular, I find this argument really persuasive:

> Time slicing keeps React responsive while it runs your code. Your code isn't just DOM updates or "diffing". It's any JS logic you do in your components! Sometimes you gotta calculate things. No framework can magically speed up arbitrary code.

In my experience, as your app grows, the amount of time you spend on dom reconciliation becomes negligible compared to your own business logic. In this case, having a framework like React (especially with concurrent mode) will really help improve perceived user experience over a naive compiled implementation.

tylerhou(3868) 1 day ago [-]

> In my experience, as your app grows, the amount of time you spend on dom reconciliation becomes negligible compared to your own business logic. In this case, having a framework like React (especially with concurrent mode) will really help improve perceived user experience over a naive compiled implementation.

In my experience, the exact opposite occurs. If there is ever any heavy computation I need to do, I usually try spawn a web worker or offload it to the server. In contrast, as your app tree grows reconciliation costs grow (super?)linearly, and more importantly there is (currently) no way to offload reconciliation.

pan_4321(10000) 1 day ago [-]

That's actually what I didn't understand in Dan Abramov's response ...

If you run a synchronous function that takes 2 seconds your app will block whether you use Svelte or React or whatever. You need to offload it to a webworker anyway.

Still, I think it's a good idea to update the DOM no more often than needed: only apply the last change to an element every 23 ms and skip the changes that have been overridden. You can do this without a virtual DOM.
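
A minimal sketch of that batching idea, assuming we coalesce text writes per element and flush once per frame. All names are mine, and `requestAnimationFrame` falls back to `setTimeout` outside the browser:

```javascript
// Coalesce DOM writes so only the latest value per element is applied once
// per frame -- no virtual DOM, just a queue. Names are illustrative.
const pending = new Map(); // element -> latest text
let scheduled = false;

const schedule = typeof requestAnimationFrame === 'function'
  ? requestAnimationFrame
  : fn => setTimeout(fn, 16); // rough frame budget outside the browser

function queueText(el, text) {
  pending.set(el, text); // a later write simply overwrites the earlier one
  if (!scheduled) {
    scheduled = true;
    schedule(flush);
  }
}

function flush() {
  for (const [el, text] of pending) el.textContent = text;
  pending.clear();
  scheduled = false;
}
```

Ten state changes to one element between frames cost ten map writes but only one real DOM mutation.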

dsissitka(10000) 1 day ago [-]

Rich recently addressed this:


rayiner(2833) 1 day ago [-]

Only ever having used MFC and Swing, this seems odd to me. A diff of the entire DOM on every state change? You never see anything like that in native toolkits. ELI5: What problem is that solving?

tptacek(76) about 16 hours ago [-]

You write your UI declaratively, in almost straight-line code, and never 'update' anything in the UI itself. It's significantly easier to write.

adamnemecek(16) 1 day ago [-]

Makes programming somewhat easier and also batches layouts.

Tehdasi(10000) about 20 hours ago [-]

The problem it's solving is the DOM being unsuited to writing the kind of applications that MFC and Swing are used for writing. (which, for the web, is a SPA)

namelosw(10000) 1 day ago [-]

I thought this was well known years ago. A better description for VDOM would be 'it's not fast, and it's not slow either'.

But frankly, what I see in virtual DOM is not about speed. It's a declarative interface, an abstraction. It's more like a blueprint that's easier to interpret across different environments like React Native, WebGL. Even if you don't need any of these cross-platform benefits it's still good for testing -- without real DOM.

As for performance, it could be an aspect of advertising but I doubt it really matters anymore.

I saw many applications where AngularJS is too slow, and I even worked on one for quite a while -- it's just a fairly typical 'enterprise application'. But I have yet to see a real-world front-end project where React is too slow.

Users won't even care about if it is 10ms or 30ms.

Bahamut(3728) about 20 hours ago [-]

I work on one with React that is too slow in the browser with a team that only has senior devs, and users even filed bugs about the performance - we do heavy computations, and React's model of blocking rendering on having everything updated can freeze our UI for up to 10s while data comes in from various API requests. I believe our app would be performing much better for the end user if we were using Angular 2+ interestingly enough due to its built in incremental updating - there would be other tradeoffs though.

Part of the problem is not having good enough APIs currently (we have to make too many API requests and data payloads are too fat, sometimes up to 2 MB per request), but imperfect APIs tend to be the case in a lot of apps early in their lifecycle. I've actually been a bit disappointed in React's performance from a UX perspective.

Rusky(2825) 1 day ago [-]

The point of the article is that you can get that same declarative interface at much lower cost, with a more efficient implementation.

oraphalous(10000) 1 day ago [-]

I think this article - and many of the comments on this thread are forgetting the context of how DOM manipulation was typically done when the virtual DOM approach was introduced.

Here's the gist of how folks would often update an element. You'd subscribe to events on the root element of your component. And if your component is of any complexity at all - first thing you'd probably do is ask jQuery to go find any child elements that need updating - inspecting the DOM in various ways so as to determine the component's current state.

If your component needed to affect components higher up, or sibling to the current instance - then your application is often doing a search of the DOM to find the nodes.. and yes if you architect things well then you could avoid a lot of these - but let's face it, front end developers weren't typically renowned for their application architecture skills.

In short - the DOM was often used to store state. And this just isn't a very efficient approach.
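
A toy contrast of the two patterns (plain objects stand in for DOM nodes so this sketch runs anywhere):

```javascript
// A plain object stands in for a DOM node here.
const el = { textContent: '0' };

// DOM-as-state (jQuery-era pattern): to know the count, parse it back
// out of the document, then write the new value in.
function incrementDomState(node) {
  const current = parseInt(node.textContent, 10); // read state from the DOM
  node.textContent = String(current + 1);
}

// Model-first pattern: state lives in a JS object; the DOM node is only
// ever written to, never read as a source of truth.
const model = { count: 0 };
function incrementModel(node) {
  model.count += 1;
  node.textContent = String(model.count);
}

incrementDomState(el); // el.textContent is now '1'
```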

This is what I understood the claim 'VDOMs are faster than the real DOM' to mean - and the article pretty much elides this detail.

As far as I'm aware, React and its VDOM approach deserve the credit for changing the culture of how we thought about state management on the frontend. That newer frameworks have been able to build upon this core insight - in ways that are even more efficient than the VDOM approach - is great, but they should pay homage to that original insight and change in perspective React made possible.

I feel this article and many of the comments here so far - fail to do that - and worse, seem to be trying to present React's claim of the VDOM faster than the DOM as some kind of toddler mistake.

EB66(10000) 1 day ago [-]

What I think this article does very well is rebut the myth that the DOM is slow. The DOM is not slow -- on the contrary, it is very fast. What is slow is browser reflow, page refreshes, display calculations, etc. In web app development of yesteryear, browser reflow was typically triggered by poorly conceived manual DOM manipulations -- which gave birth to the myth that the DOM itself is slow.

Implementing a virtual DOM and VDOM diffing is just one way to manipulate the DOM more efficiently and intelligently. At my work, we've chosen a different path without the overhead and leaky abstraction of a virtual DOM.

We built our own component-based SPA framework and recently open sourced it ( https://github.com/ElliotNB/nimbly ). Each component must have a definition of what state mutations should trigger what portions of the component DOM (via CSS selectors) to refresh. There's no extra overhead for a VDOM and VDOM diffing at all. The only overhead is accrued ahead of time by the developers who must write a definition of how their component should update in response to state changes. When state does change, the framework bundles up the queued DOM changes between all components on the page, identifies/eliminates any redundant changes and refreshes the DOM in one go.

mruniverse(10000) 1 day ago [-]

> ...seem to be trying to present React's claim of the VDOM faster than the DOM as some kind of toddler mistake.

It wasn't a toddler mistake, but it was repeated over and over to convince people to move to React.

edem(3072) about 17 hours ago [-]

React has nothing to do with this. They just took an idea as old as Windows 1.0 and applied it to web development.

stolsvik(10000) about 16 hours ago [-]

Why doesn't the credit for 'changing the culture of how we thought about state management on the frontend' go to AngularJS? At least Angular is what changed it for me, and it is the oldest of them.

ljm(10000) about 18 hours ago [-]

I love how many people there are in this thread that somehow avoided the jQuery hell a lot of us battled against.

I was involved in that, mostly from when my paycheque involved throwing together websites in Drupal 6. Holding that architecture together was trouble enough without having to also care about the frontend, we were only scripting it. The idea of the HTML being considered an 'app' was utterly alien to me and my colleagues at that time.

I fondly remember the short period of time where we had post after post attempting to explain what a closure is, because for most of the authors back then the concept of a function pulling outside variables into its scope was utterly alien to us. Even now this practical meme persists[0].

Ten years later and I find closures more intuitive than half the stuff we've concocted in OOP land.

More than that, jQuery was a means to an end: a way to shove low-effort animation and UI into an app to make it look snazzy and 'Web 2.0' (glass-effect banners, drop shadows and all). If it wasn't jQuery it was script.aculo.us.

Then we got Backbone and Coffeescript at around the same time, by which time I was a Ruby dev. Backbone contributed to a fundamental shift in how we build a frontend, and we had Knockout, Sencha, ExtJS, etc. following along. And then the concept of 'comet' (keeping an HTTP connection alive for long polling) and MeteorJS.

The impact of React and its concept of the VDOM has been phenomenal. It may be overhead as the Svelte authors say, but the experience of working with React, and any similar library in the ecosystem, is a boon to anyone who wants to do serious work in the browser. Without being hyperbolic this feels like the legacy of Smalltalk: programming in a dynamic environment, only you're not actually aware that you are.

There has to be a fantastic retrospective on the progression of JS since that initial ten-day genesis.

[0] https://medium.com/dailyjs/i-never-understood-javascript-clo...

acdha(3654) about 19 hours ago [-]

> yes if you architect things well then you could avoid a lot of these - but let's face it, front end developers weren't typically renowned for their application architecture skills. ... This is what I understood the claim that VDOMs are faster than the real DOM meant - and the article is pretty much eliding this detail.

I agree that a large part of the problem is the lack of proper architecture and general poor quality of practice but that's also a problem for the distinction which you're attempting to draw. I think the core React team likely meant what you meant but the community's love of both fads and crapping on whatever isn't new and shiny meant that nuance was deeply buried under the "it's go-faster magic from Facebook!!!" marketing train.

I remember having absolutely surreal experiences where it was like "why are you saying it's faster? Here's a benchmark showing it's 5 orders of magnitude slower." "It uses a virtual DOM" "I know, but don't you have a benchmark where it's actually faster?" "You just don't get it".

I do think React helped bring some improvements around architecture but I think an under-appreciated part of that was that since it required a full compiler toolchain, 100% of projects could use the latest JavaScript features (notably ES6 classes and arrow functions), data structures, modules rather than rewriting everything, etc. which noticeably reduced the number of complex things people had to get right, tune, and reason about.

hn_throwaway_99(4016) 1 day ago [-]

> This is what I understood the claim that VDOMs are faster than the real DOM meant - and the article is pretty much eliding this detail.

I disagree with this. I think the major key insight and innovation with React, which this article fully acknowledges, is that it is much easier to think about declarative UI as solely a function of the current state without having to think about the transitions to arrive at that state, and, importantly, the virtual DOM lets you do that performantly.

In other words, to take the example from the article, it would be great if we could have an 'onEveryStateChange() { document.body.innerHTML = renderMyApp(); }' function, but doing that would be much too slow because it would recreate the full, real DOM. Using the virtual DOM lets you write essentially the same code, but in a performant manner, and I think the article is clear on this fact.
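
Sketched out (renderMyApp is the comment's hypothetical, rendered here to a string so the shape is concrete):

```javascript
// Declarative rendering in its naive form: the whole UI is a pure
// function of state, rebuilt from scratch on every change.
function renderMyApp(state) {
  const items = state.items.map((item) => `<li>${item}</li>`).join('');
  return `<ul>${items}</ul>`;
}

// In a browser, the "much too slow" version from the comment would be:
//   onEveryStateChange(() => { document.body.innerHTML = renderMyApp(state); });
// A virtual DOM lets you keep this code shape while only patching what changed.
const state = { items: ['one', 'two'] };
console.log(renderMyApp(state)); // '<ul><li>one</li><li>two</li></ul>'
```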

I'm not familiar with Svelte, but the article has piqued my interest because it is making it sound like it lets you write declarative UI but without needing to do the full virtual DOM diffing.

toasterlovin(10000) 1 day ago [-]

Not really a JS developer, but from my recollection, everything you said is essentially correct. I was on the periphery of the Ember community when React dropped. I remember a blog post or something where the Ember developers basically acknowledged that React's virtual DOM approach was significantly more performant than what Ember was doing and, to the Ember community's credit, resolved to re-architect their view rendering layer to shrink the performance gap.

jasonkester(2154) 1 day ago [-]

the DOM was often used to store state.

Every once in a while I'm reminded that I'm mostly disconnected from the way 'most' people build things. Thanks for this insight. It finally explains why I hear people talking down about 'jQuery developers', if that was something that people actually did.

But wow. I've been building javascript-heavy web stuff since the mid 90's and it had never occurred to me to do that. You have your object model, and each thing had a reference back to its DOM node and some methods to update itself if necessary. All jQuery did was make it less typing to initially grab the DOM node (or create it), and give you some shorthand for setting classes on them.

It also explains why people liked React, which has always seemed completely overcomplicated to me, but which probably simplified things a lot if you didn't ever have a proper place to keep your data model.

I can't imagine I was the only one who had things figured out back then, though. The idea you're talking about sounds pretty terrible.

Chris_Newton(10000) 1 day ago [-]

In short - the DOM was often used to store state. And this just isn't a very efficient approach.

By some people, sure, but separating state and business logic from presentation and rendering logic was a well-known idea many, many years before React was around.

I think the basic premise of the article here is correct. The important development with React that hadn't previously been widely seen in front-end, JS-based web development wasn't the virtual DOM, it was the declarative description of the rendered content — building it in absolute terms from the current state, not in relative terms from the current and previous state. The virtual DOM is a means to that end: it makes that approach fast enough that it can be used with acceptable performance for a lot of realistic applications.

This doesn't change the fact that the declare-and-diff strategy is extremely expensive compared to actively observing only necessary changes in the underlying state and making only necessary local updates in the (real) DOM. In a typical web app, if there is such a thing, that might not matter very much. In more demanding cases, say when you've got tables with thousands of cells or you're drawing a complicated diagram with SVG, it's still all too easy to run into performance lag with any library that uses this strategy. Then you start using escape hatches like shouldComponentUpdate or using lifecycle methods to manipulate the (real) DOM directly rather than rendering through React, at which point you're not really benefitting from React at all for that particular component (though of course you might still be benefitting from it for the other 90% of your UI code and incorporating the rest into the same overall design using those escape hatches might make sense in that situation).

hopsoft(10000) about 20 hours ago [-]

> the DOM was often used to store state. And this just isn't a very efficient approach.

StimulusJS is a modern approach that uses the DOM to manage state. In my experience it has proven to be quite simple and performant.


austincheney(3680) about 21 hours ago [-]

> was typically done when the virtual DOM approach was introduced.

Don't care and I am guessing you didn't read the article. Ignorance of the DOM does not redefine what it is. This is just as true for jQuery stupidity as it is for React virtual DOM nonsense. Fortunately, the DOM is defined in a standard specification so there is a document of truth that you can go read.

> In short - the DOM was often used to store state. And this just isn't a very efficient approach.

Again, don't care. Other people's misuse and stupidity is their problem. That stupidity does not alter the technology specification.

If you really want to know what the DOM is I wrote a very brief summary with links to the specifications: https://prettydiff.com/2/guide/unrelated_dom.xhtml

delusional(10000) about 21 hours ago [-]

I still do some old school jQuery manipulations, and a lot of what kills you on the performance front is the repeated manipulation of the same set of elements in the same frame. Often, you go in and change an element, only to have another piece of javascript change it again. Each of those modifications then requires a complete relayout of the page; that's when it gets expensive.

You could say that we should just optimize our jQuery, and you'd be right. We just don't have a structured way of figuring out everything that touches a 'component'. (What is a component anyway, when it's all ad hoc?)
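
A minimal sketch of that missing structured way: key each write so repeated mutations of the same element in one frame collapse into a single one (and thus one relayout instead of several):

```javascript
// Queue of pending writes, keyed by element: last write wins.
const pendingWrites = new Map();

function queueWrite(key, fn) {
  pendingWrites.set(key, fn);
}

function flushWrites() {
  let applied = 0;
  for (const fn of pendingWrites.values()) { fn(); applied++; }
  pendingWrites.clear();
  return applied;
}

// A plain object stands in for a DOM node.
const el = { textContent: '' };
queueWrite('header', () => { el.textContent = 'first'; });
queueWrite('header', () => { el.textContent = 'second'; }); // replaces the first
flushWrites(); // only one write actually touches the element
```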

addicted(10000) 1 day ago [-]

This is how I remember things as well. The Virtual DOM was a huge improvement compared to other contemporary frameworks because it consolidated multiple changes into a single DOM operation.

For one thing, all the frameworks at the time were doing two-way binding. Which meant that the smallest change could end up triggering a bunch of computeds and observables, to the point where any change would trigger a bunch of re-renders.

React bundled all of those into a single re-render. Further, and I may be mistaken about this, React helped dispel two-way binding and show that it was a performance and reasoning disaster. If that was the case, I'd suppose eliminating two-way binding also likely played a large role in the performance improvements, which may have been incorrectly attributed to the VDOM.

nojvek(3822) 1 day ago [-]

The ideas of Svelte are great. It reminds me of snabbdom thunks and Inferno blueprints. If you know the view code ahead of time, you can do plenty of perf optimizations since you know exactly what changes and what to react to.

But sometimes I dynamically generate vdom nodes. Like markdown to vdom. There vdom shines. It's a simple elegant idea.

I think Svelte is exaggerating a bit.

React and vdom family of libraries are great. Svelte is great too. Not mutually exclusive.

Someone should write a Babel jsx transpiler that does svelte like compile time optimizations for react. At the same time still allows dynamic runtime diffing if needed.

No reason why we can't have best of both worlds.

acemarke(3489) 1 day ago [-]

The React team (and other parts of Facebook) _was_ working on a project along those lines called 'Prepack':


I think work on that has slowed for the time being, but it's got some interesting possibilities.

fnordsensei(3240) 1 day ago [-]

I do this with React all the time as well, though via ClojureScript and Re-frame[1], in which nodes are represented as plain Clojure data structures.

E.g., send an article from the server, formatted in EDN/Hiccup[2][3]. Insert it into a component in the frontend, and it's converted to VDOM nodes. No further logic or conversion required.

[1]: https://github.com/Day8/re-frame

[2]: https://github.com/edn-format/edn

[3]: https://github.com/weavejester/hiccup/wiki/Syntax

computerex(10000) 1 day ago [-]

What a strange post. Yes, virtual DOM is overhead, much like JIT compilation is an 'overhead'. But this overhead ultimately translates to better performance because many virtual DOM transformations can be buffered into 1 transformation of the real DOM.

austincheney(3680) about 21 hours ago [-]

The JIT is not overhead and it is in no way related to the DOM. The JIT is a compiler in a VM. The DOM is a standard data model living in memory.

Rusky(2825) 1 day ago [-]

Better performance than completely recreating the DOM, sure. But all the time spent constructing and diffing the virtual DOM is pure overhead compared to simply doing that 1 real transformation directly.

azangru(10000) 1 day ago [-]

> ultimately translates to better performance

Better compared to what?

For a library like React, which re-renders the DOM tree every time a component's props or state change, a virtual DOM with diffing and patching is indeed a better approach compared to naive re-rendering of the whole DOM.

But as Rich Harris said during his talk about Svelte v.3.0, whenever he hears claims about better performance of frameworks based on virtual DOM, illustrated with benchmarks, he runs the same benchmarks with Svelte (not based on virtual DOM), and inevitably gets better results.

EGreg(1721) 1 day ago [-]

I have always said:

Angular is diffing the model

React is diffing the view

That's all. Better to just skip the diffing usually, and grab references to elements and update them when certain events happen. It's really ok!
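
A minimal sketch of that approach, with a plain object standing in for the element reference -- no diffing anywhere:

```javascript
// "Grab a reference and update it when certain events happen."
const countLabel = { textContent: '' };

const handlers = [];
function onCountChange(fn) { handlers.push(fn); }
function setCount(n) { handlers.forEach((fn) => fn(n)); }

// Wire the element reference directly to the event:
onCountChange((n) => { countLabel.textContent = `Count: ${n}`; });

setCount(3); // countLabel.textContent is now 'Count: 3'
```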

quickthrower2(1505) 1 day ago [-]

The problem is spaghetti! You have something in the DOM and you might not know what can or has updated it or why...

floatboth(10000) about 22 hours ago [-]


lit-html essentially grabs references to elements automatically when parsing a template. You get react-like rendering without any diffing! And does not require build tools.

beders(10000) about 23 hours ago [-]

It is simply mind-boggling how much effort the JS community has put into working around the performance properties of a document layout engine to make it 'interactive' and 'responsive'.

With only 3 major rendering engines left standing, where is the concerted push to turn these document renders into general purpose, fast, desktop-quality rendering engines?

Back to Svelte vs. React vs. Reagent vs. Vue.JS vs. Angular vs. (insert framework-of-the-month-here)

One common theme seems to be: Run code to manipulate a tree-like data structure (DOM) efficiently. This obviously needs to become: Submit data to the rendering engine to manipulate the tree-like structure. (and in a way .innerHtml is doing that for a sub-tree, but is not suitable for general-purpose tree manipulation)

writepub(10000) about 23 hours ago [-]

> where is the concerted push to turn these document renders into general purpose, fast, desktop-quality rendering engines

The DOM is fast enough for desktop apps @60 or even 90 frames per second, especially if you follow best practices (no framework required)

ng12(10000) 1 day ago [-]

I keep hearing this and find it really hard to care about. Runtime performance is not a bottleneck for me. Once in a blue moon I'll have to optimize a React component with shouldComponentUpdate but otherwise I have no performance concerns even on old browsers.

There are other characteristics that are very, very important like build size. VDOM is not worth thinking about.

Honestly I don't understand Svelte. It sounds like it's very good at the things it does but the things it does are not the things I need.

dsissitka(10000) 1 day ago [-]

If you value build size you might want to take another look at Svelte. Build size is one of its strengths. For example, the Svelte implementation of RealWorld is roughly 10% of the size of the React/MobX implementation:


Edit: Confused the React/Redux implementation with the React/MobX implementation the first time around.

ralphstodomingo(10000) about 21 hours ago [-]

I feel as if Svelte came at the wrong time. These days, when most people know either React or Vue or some other thing, and computing devices are performing better over time, there are diminishing returns on performance optimization. Sure, you do a bit of it, and then you're often better off doing something else, like enhancing developer experience for example.

I really like the idea, and will play around with it, but fat chance it's getting into production with me. I am much more productive with React now, and I worry more about business requirements than raw performance (that I almost never worry about these days).

JMTQp8lwXL(10000) 1 day ago [-]

To my understanding, one of (or the singular) author of Svelte was a developer for the NY Times who wanted to easily create visualizations with thousands of data points, and the existing UI libraries weren't cutting it on performance. Depending on the types of applications you build day-to-day, this problem space might be niche. Your standard CRUD app (tables, forms, etc) would not be leveraging Svelte's capabilities; in a sense, it could be a premature optimization.

lenkite(10000) 1 day ago [-]

Personally, I find modern template based approaches like lit-html, hyperhtml/lighterhtml better and faster. And also being far, far smaller. Throw in a CSS framework like bulma or tailwind-css and you are good to go at a smaller footprint and better performance.

floatboth(10000) about 22 hours ago [-]

And lit works without any build tools!

fpoling(3531) about 23 hours ago [-]

Direct manipulations of the DOM are expensive. It is vastly cheaper to create or update a JS object than to create or manipulate a DOM node. So the claim that virtual DOM is always overhead is not true. The diff algorithm can give a set of DOM operations that are less expensive than a typical sequence of manual mutations. So virtual DOM can be faster if the savings from fewer DOM operations are bigger than the extra JS work.

Surely carefully crafted direct DOM mutations will be the fastest approach, but it typically leads to hard to maintain code.
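
A toy version of that trade-off: the diff itself is extra JS work over cheap values, but its output is the minimal set of DOM operations to apply:

```javascript
// Diff two flat "virtual" lists and emit only the operations needed,
// instead of one real DOM mutation per manual change.
function diff(prev, next) {
  const ops = [];
  const len = Math.max(prev.length, next.length);
  for (let i = 0; i < len; i++) {
    if (i >= prev.length) ops.push({ op: 'insert', i, value: next[i] });
    else if (i >= next.length) ops.push({ op: 'remove', i });
    else if (prev[i] !== next[i]) ops.push({ op: 'update', i, value: next[i] });
  }
  return ops; // apply this small set to the real DOM in one batch
}

// Three manual mutations might have touched the DOM three times; the
// diff shows only one real change is needed.
diff(['a', 'b', 'c'], ['a', 'x', 'c']); // [{ op: 'update', i: 1, value: 'x' }]
```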

atan(10000) about 6 hours ago [-]

I'm not sure you understood the article. The Svelte compiler does in fact generate code that performs 'carefully crafted direct DOM mutations,' though it is not hard to maintain, because the compiler handles it. Given code that already knows exactly which DOM updates to make, virtual DOM would indeed be pure overhead.

kanonk(10000) about 23 hours ago [-]

Seems like the author - and a lot of people here - have failed to realize this. This is the real benefit in DOM manipulations with VDOM.

osrec(3233) 1 day ago [-]

We implemented a library at my company that does not use a virtual DOM, but instead captures reactive 'change functions'.

The framework captures dependencies between the reactive 'change functions' and underlying variables, and executes the functions whenever a variable's value changes. You can also have dependencies between variables (like computed vars in Vue), and the lib works out the correct order for calculation and execution. All the change functions get queued up and applied in order before the next repaint.

Everything is component based, and there is even a nice kind of inheritance (with lazy, async component loading).

It works rather well. I'd be happy to share it with anyone that's interested! Not open sourcing it yet, as I envisage it would be a full time job to support it!

Edit: the upside of the change functions is that YOU decide how the DOM is updated. It's really quite cool to be able to implement a function like the following and have it run to update this.$dateOfBirth whenever this.data.dateOfBirth changes:
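
A hypothetical sketch of such a change function (the library isn't open sourced, so the shape below is a guess based on the names in the comment; a plain object stands in for the cached DOM node):

```javascript
class PersonComponent {
  constructor() {
    this.data = { dateOfBirth: '' };
    this.$dateOfBirth = { textContent: '' }; // cached element reference
  }
  // The framework would invoke this whenever this.data.dateOfBirth changes:
  dateOfBirthChanged() {
    this.$dateOfBirth.textContent = `Born: ${this.data.dateOfBirth}`;
  }
}

const c = new PersonComponent();
c.data.dateOfBirth = '1990-01-01';
c.dateOfBirthChanged(); // in the framework, this call would be automatic
```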

That is of course a simple example, but when you need even more control, reactive change functions have proven to be super useful (for us anyway!).

markharper(10000) about 17 hours ago [-]

That sounds similar to the incremental lambda calculus described in this paper: https://arxiv.org/abs/1312.0658 There's an implementation for DOM updates in Purescript (https://blog.functorial.com/posts/2018-04-08-Incrementally-I...), but I haven't come across a similar approach in Javascript yet.

saggas(10000) 1 day ago [-]

I have never used modern JS frameworks like Angular, React and Vue and I have always assumed (hoped) that they contained optimisations that you would be unlikely to use in your vanilla JS code even though you could. Something like FastDOM which batches read/write operations to avoid unnecessary reflows. Do they contain anything like that?

xtagon(10000) 1 day ago [-]

I'm not sure if this is exactly what you mean, but Ember.js has a concept called a 'runloop' which batches different actions into queues, which does seem to help with rendering/reflows.

zeugmasyllepsis(10000) 1 day ago [-]

To varying degrees depending on the library, I believe the answer is 'yes, they sometimes do'. Angular and Ember at least have systems for batching user interactions, which translate into model updates, and thus potentially DOM updates. I believe the respective systems for handling this are called Zone and Backburner for Angular and Ember, but I've been out of touch with those projects for a couple of years.

There's definitely a trade-off, however. They make a huge difference for noisy events (like mouse move, scrolling, dragging, etc), but tend to make debugging much harder in my experience. When things go wrong, the stack traces nest deeply into the event handling systems and code paths no longer resemble the relatively straightforward world of traditional event calls, where a callback handler is invoked directly in response to a single event.

kevingadd(3694) 1 day ago [-]

Virtual DOM is a shaky long term bet since it's essentially betting that the cost of DOM operations will always be high enough to justify all the work you're doing (both at runtime and at trying-to-figure-out-how-to-write-this-code time). When it's easy to do your virtualization, you can just go 'well this is an optimization i can remove at any point', but if it's suddenly so complex it introduces bugs, you're in trouble.

Naturally, the cost of DOM operations didn't go unnoticed and while people have been going in on virtual dom solutions like React, the devs of Firefox, Chrome and Safari have all been aggressively optimizing the dom - making the native code bits faster and moving more of the DOM into javascript so all your JS can get inlined and optimized. It gets harder and harder for libraries to compete with regular DOM as a result.

striking(555) 1 day ago [-]

react-dom (https://github.com/facebook/react/tree/master/packages/react...) is shipped as a dep of react, and seems to be where all the heavy lifting is. I believe that, should DOM ops become fast enough that you don't need vDOM anymore, you could easily no-op all of this out with direct DOM calls.

mantap(10000) 1 day ago [-]

Surely JS engines are also optimising React-style code. e.g. the article says that react style code does a lot of unnecessary object creation (e.g. map) but if that is now a common pattern then JS engines can do a lot to optimise that away.

addicted(10000) 1 day ago [-]

Well, you could also potentially have a future where the VDOM is the primary DOM in itself.

As in, the VDOM doesn't need to be converted to HTML for the browser to render, but rather, the browser directly converts the VDOM objects into pixels.

tannhaeuser(2996) 1 day ago [-]

DOM operations (node insertions, deletions) are trivial. It's the CSS reflow/relayout that's expensive. Though that can be trivially solved by preparing a shadow DOM off-screen, then finally replacing the changed fragment with the newly built-up one. DOM diffing is just a convenient method to do it.

pier25(3416) 1 day ago [-]

The cost of the DOM operation is the same. The difference is that React batches the changes to reduce the number of operations whenever possible.

Sawamara(10000) 1 day ago [-]

I was running some quite complex UI systems in several of my projects with vanilla JS, cached DOM elements, all that jazz. Nothing like VDOM. Then eventually, it started to finally bog me down. Like rendering an inventory in an RPG system where you can buy stuff from the vendors: I started to get dissatisfied with change operations that lasted upwards of 2-3ms on a bad day. Then I started caching even more DOM elements, and started to build local data structures to see what was rendered last, and to what, to avoid rerendering everything, and to optimize to the change-based render only.

A few weeks of this, and it dawned on me that had I needed to generalize my solution, I would have arrived at the exact same model that hyperscript does for its diffing, and something similar to what's underneath React's diffing method (or Preact since I prefer that, but they share the API).

So yeah, virtual DOM is just a more clever and straightforward way to map your state to the DOM, identifying exactly where the changes happened, and only updating those nodes, instead of doing any queries towards the DOM API (costly, can cause rerender, like when checking for bounding boxes, etc).

It IS more useful because you no longer need to maintain a hyper-specific update function per project and manually created/maintained diffs.

Mikushi(10000) 1 day ago [-]

Not to be grating, but your problem sounds more like using the wrong tool for the job. The DOM is not made to render video game UI; it is a bad tool for that, as you discovered yourself.

DigitalSea(3667) about 20 hours ago [-]

This is why I use Aurelia. It's a JavaScript framework many here have probably never heard of or used; it debuted in 2015 and I have been working with it for four years now. Sadly, Aurelia debuted at the height of the React hype and, soon after, the Vue hype.

Rob Eisenberg (the man in charge of the Aurelia project) had the right idea straight out of the gate. A reactive binding and observation system that worked like a virtual DOM (isolated specific non-destructive DOM operations) without the need for an actual virtual DOM. Which allows you to use any third-party library without worrying about compatibility or timing issues with the UI.

This is one area where React falters, at least when I used it: third-party libraries clashed with the virtual DOM. When you start introducing abstractions to solve imaginary problems caused by improperly written code (the myth of the DOM being slow), you introduce issues you have to battle later on as your application scales.

Tobani(4129) about 20 hours ago [-]

The default behavior for javascript interacting with the DOM is incredibly slow once the page gets complicated enough. I've certainly seen it first-hand. This may not be a problem you have, and indeed maybe not everybody needs react. But the problems things like react/vue/whatever solve (correctly or not) isn't imaginary.

spankalee(3051) 1 day ago [-]

This is absolutely true.

Virtual DOM diffs do a huge amount of unneeded work because in the vast majority of cases a renderer does not need to morph between two arbitrary DOM trees, it needs to update a DOM tree according to a predefined structure, and the developer has already described this structure in their template code!

A large portion of JSX expressions are static, and renderers should never waste the time to diff them. The dynamic portions are clearly denoted by expression delimiters, and any change detection should be limited to those dynamic locations.

This realization is one of the reasons for the design of lit-html. lit-html has an almost 1-to-1 correspondence with JSX, but by utilizing the static/dynamic split it doesn't have to do VDOM diffs. You still have UI = f(data), UI as value, and the full power of JavaScript, but no diff overhead and standard syntax that clearly separates static and dynamic parts.

The syntax is very close:


   render(props) {
     return <>
       <h1>Hello {props.name}</h1>
       {props.items.length
         ? <ol>{props.items.map((item) => <li>{item}</li>)}</ol>
         : <p>No Items</p>}
     </>;
   }

   render(props) {
     return html`
       <h1>Hello ${props.name}</h1>
       ${props.items.length
         ? html`<ol>${props.items.map((item) => html`<li>${item}</li>`)}</ol>`
         : html`<p>No Items</p>`}
     `;
   }
I really think the future is not VDOM, but more efficient systems, and hopefully new proposals like Template Instantiation can advance and let the browser handle most of the DOM updates natively.

edit: closed JSX fragment as pointed out

dmitriid(1980) about 24 hours ago [-]

> I really think the future is not VDOM, but more efficient systems, and hopefully new proposals like Template Instantiation can advance

Template Instantiation is like a half of a half of 1% 'advance' in the best case scenario. It's being rushed forward despite the fact that no one sat down and listed all the benefits vs. all the downsides of implementing it in the browser.

What browsers do need is a declarative DOM API and a native 'DOM as a function of state' which renders the whole instantiation proposal moot, and at the same time actually advances the browser as a platform.

There's a discussion on GitHub which in my opinion is going nowhere because TI is viewed as unquestionable good https://github.com/w3c/webcomponents/issues/704

foobarbecue(4069) 1 day ago [-]

Typo -- you didn't close your fragment on the JSX.

ec109685(4105) 1 day ago [-]

One nice thing about React is that it can take care of quoting for you depending on the method call the JSX template translates into (attribute, value, element name). String templates don't have that nice property.

lacampbell(4124) 1 day ago [-]

Do you use lit-html?

The idea is interesting. It looks more procedural than say preact, but I also appreciate the directness.

xtagon(10000) 1 day ago [-]

Svelte's philosophy on turning the virtual DOM concept inside out sounds like it has merit, and is very promising. But it's going to take a lot more than that, in my opinion, before a large number of people consider switching from React, Ember, etc.

I don't see that as a drawback, I see it as an open opportunity for Svelte to keep building out on improvements other than the DOM updates, and catching up with everything else the SPA alternatives provide that have nothing to do with the virtual DOM.

For example, Ember is just a joy to work with, and makes it easy to rapidly prototype reactive frontends in a way that reminds me of Ruby on Rails's initial appeal to developer happiness, and the tooling is very mature. If you could unlock all those benefits while keeping the blazing fast DOM updates, oh boy!

hatch_q(10000) about 15 hours ago [-]

Tooling is the key word here. Svelte simply doesn't have the tooling needed for any big project:

- testing/testability (unit tests are easy, but what about functional, e2e?)
- strong-typing support (Flow, TypeScript)
- good IDE support
- i18n (ICU support, etc.)? They need to redo what ember-intl or react-intl do.

Without these things it's simply not viable to start bigger projects with new framework.

wnevets(10000) 1 day ago [-]

I remember when react was new and the reason why everyone should use it was because of its virtual dom and lack of typescript.

Now everyone is saying the virtual dom isn't the point and advocating typescript.

ec109685(4105) 1 day ago [-]

Seriously, who actually said lack of typescript was a plus for React ever? That seems like a non-sequitur.

pier25(3416) 1 day ago [-]

After years of doing mostly React and Vue SPAs I've never experienced a performance problem.

The Svelte metrics that interest me most are lines of code and bundle size, in both of which it excels.

Here is an article that compares the exact same project called RealWorld written in a number of front end libraries/frameworks.


Here is the main repo for the RealWorld project for front and backend:


pan_4321(10000) 1 day ago [-]

Same experience here but with React and Angular (2+). No real world differences that can be observed by an average user in the average app.

The choice has to be made by ecosystem, programming style, etc.

React as a library doesn't offer enough for me personally. Angular is too heavy on concepts. Vue seems to hit the sweet spot?

QuadrupleA(3852) 1 day ago [-]

So glad to see this article, I've long wondered how this 'virtual DOM is faster' myth got accepted as gospel when clearly it's pure overhead, compared to a well written app that updates the DOM directly only when needed (which I find is easy to accomplish in most apps).

Can't speak to the svelte approach due to inexperience with it, but good to see this myth challenged - react.js is fine but I worry there's been a cargo cult mentality around it, that it's The One True Modern Way To Do Web Apps, when really it's a tradeoff that involves some extra layers and performance baggage, and like any tool you need to weigh the pros and cons.

Tade0(10000) about 24 hours ago [-]

> react.js is fine but I worry there's been a cargo cult mentality around it

And also an actual personality cult directed at some key people.

mruniverse(10000) 1 day ago [-]

Yeah, you're right. But then you get others now trying to say 'No no no, you have it all wrong. We didn't really say VDOM was faster. You misunderstood.'

nailer(502) about 23 hours ago [-]

> compared to a well written app that updates the DOM directly only when needed (which I find is easy to accomplish in most apps).

> Can't speak to the svelte approach due to inexperience with it

Heya! I've been using Svelte for the last week for a new project - knowing React has the lion's share of the community right now, but feeling like Svelte is where things are going to be.

Regarding: 'compared to a well written app that updates the DOM directly only when needed' - exactly! Svelte actually does this for you. Given the following Svelte code:

    age = 7;
That just updated anything bound to 'age' in the DOM. No set() or setState() or whatever. Or for an array:

    favoriteFoods = favoriteFoods;
That (a self-assignment after mutating the array) just updated the DOM for anything bound to 'favoriteFoods'.

The whole point of Svelte is that it takes your input JS, and builds the well written app as output!

It's very easy to pick up and I like it a lot.
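Roughly (my own illustration, not Svelte's actual emitted code), what the compiler can generate for a binding like `<p>{age}</p>` is a targeted write guarded by a dirty flag, with no diffing at runtime; the plain object here stands in for a real DOM node so the sketch runs anywhere:

```javascript
// Toy shape of compiled output for `<p>{age}</p>` (names invented for
// illustration): one direct write per binding, executed only when dirty.
function createFragment(ctx) {
  const node = { textContent: String(ctx.age) }; // initial render
  return {
    node,
    update(changed, ctx) {
      // only touch the DOM when the compiler marked this binding as changed
      if (changed.age) node.textContent = String(ctx.age);
    },
  };
}

const frag = createFragment({ age: 7 });
frag.update({ age: true }, { age: 8 }); // one targeted write, no diff
```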


lacampbell(4124) 1 day ago [-]

> compared to a well written app that updates the DOM directly only when needed (which I find is easy to accomplish in most apps)

Do you do full blown SPAs with this technique? I mean I'm sure it's possible, but I wonder how difficult it is.

I wouldn't use (p)react for a website that just needed a bit of AJAX, but I find it a bit hard to imagine doing an actual app with vanilla JS.

andrewfromx(2810) 1 day ago [-]

tell me about it! Or, heaven forbid, just make the user get the entire html page rendered from server after every click like it's 2003 or 2004!

vfc1(3131) 1 day ago [-]

Angular also works in a somewhat similar way, there is also no virtual DOM.

Instead, the modern compiler is used at build time to generate what looks like a change detection function and a DOM update function per component.

These functions will detect changes and update the DOM in an optimal way without any DOM diffing.
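A hand-written analogue of such a generated pair might look like this (a plain-JS toy; the names and shape are my own, not Angular's actual compiler output): one cached value and one targeted write per template expression, with no tree diffing anywhere.

```javascript
// Toy per-component change detector: each binding pairs a template
// expression (read) with a targeted DOM write (write).
function createDetector(bindings) {
  const cache = new Array(bindings.length).fill(undefined);
  return function detectChanges() {
    let writes = 0;
    bindings.forEach((b, i) => {
      const v = b.read();      // evaluate the template expression
      if (v !== cache[i]) {    // dirty check against the cached value
        cache[i] = v;
        b.write(v);            // direct update, no diffing
        writes++;
      }
    });
    return writes;
  };
}

const state = { name: 'Ada' };
const detect = createDetector([
  { read: () => state.name, write: (v) => console.log('write:', v) },
]);
detect();            // first run: one write
state.name = 'Grace';
detect();            // one targeted write for the changed expression
```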

However, because Javascript objects by default are mutable, after each browser event Angular in its default change detection mode has to check all the template expressions in all the components for changes, because the browser event might have potentially triggered changes in any part of the component tree.

If we want to introduce some restrictions and make the data immutable, then we can check only the components that received new data by using OnPush change detection, and even bypass whole branches of the component tree.

This is the current state of things; for the near future, Angular is having its internals rebuilt in a project called Ivy.

One of the main goals of Ivy is to implement a principle called component locality.

Ivy aims at getting to a point where if we change only one component, we only have to recompile that component and not the whole application.

I think the article puts the focus on the wrong thing. The current change detection and DOM update mechanisms made available by modern frameworks, virtual DOM or not, are more than fast enough for users not to notice any difference, including on mobile and once the application is started.

What we need is ways to ship less code to the browser, because that extra payload makes a huge difference in application startup time.

2T1Qka0rEiPr(10000) about 24 hours ago [-]

Thanks for the write up - it's a very succinct explanation of how Angular works in comparison.

I found the original article to be a really good read, and the Svelte approach in general seems rather neat. I do however find that in this current front-end framework sphere, there seems to be a huge amount of religiosity and one-upping going on.

I hear routinely (on-line and off) developers vocalising some anti-[jQuery,angular,etc.] mantra, which to be honest saddens me. Yes the jQuery approach was flawed in so many ways in comparison to the modern frameworks. Yes Angular 1.x was flawed in many ways compared to what we have on offer today. But those tools were still great improvements on what we had before (for anyone who knew the DOM-API standardisation nightmares pre-jQuery, or state management / testability woes pre angular/react).

Svelte may take us down the next path, and if it allows us to produce better, smaller, more testable code then it has my full backing. But I think as a community we need to strive to be less polarising - from my perspective it's likely to be mostly reductive, and lead to even more JavaScript fatigue.

fauigerzigerk(3128) about 24 hours ago [-]

>The current change detection and DOM update mechanisms made available by modern frameworks virtual DOM or not are more than fast enough for users to notice, including on mobile and once the application is started. What we need is ways to ship less code to the browser [...]

I wonder how it affects battery usage though. Downloading the code doesn't happen as often as running the code, if it's really an app and not content needlessly packaged as an app.

underwater(10000) 1 day ago [-]

> The original promise of React was that you could re-render your entire app on every single state change without worrying about performance. In practice, I don't think that's turned out to be accurate. If it was, there'd be no need for optimisations like shouldComponentUpdate (which is a way of telling React when it can safely skip a component).

It's shouldComponentUpdate(), not shouldDOMUpdate(). Even if DOM operations are direct, or the virtual DOM is infinitely fast, there are plenty of situations where you want to avoid running application code on every update.
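The distinction can be sketched in plain JavaScript (a toy, not React's API; `memoize` and its shape are invented for illustration): skipping the update means the component's own render code never runs at all, regardless of how fast the DOM layer is.

```javascript
// Toy illustration of the shouldComponentUpdate idea: skip the component's
// application code entirely when its props haven't changed.
function memoize(render, propsChanged = (a, b) => a !== b) {
  let lastProps, lastOutput, renders = 0;
  const component = (props) => {
    if (renders === 0 || propsChanged(lastProps, props)) {
      renders++;                   // expensive application code runs here
      lastOutput = render(props);
      lastProps = props;
    }
    return lastOutput;             // otherwise reuse the previous output
  };
  component.renderCount = () => renders;
  return component;
}

const Greeting = memoize((props) => `<h1>Hello ${props.name}</h1>`);
const props = { name: 'Ada' };
Greeting(props); // renders once
Greeting(props); // same props reference: render code skipped entirely
```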

Some frameworks use data binding to track whether a component update is necessary. This is what Svelte does, but because there are no explicit checks it has some weird conventions around annotating certain bound values:

        export let num;
        $: squared = num * num;
React just happens to implement this behaviour differently: it assumes a component needs updating unless the shouldComponentUpdate() hook says otherwise. The advantage (ironically) is that React is 'just JavaScript', whereas Svelte needs a compiler that can instrument the code.

This design decision shouldn't be confusing to the author; I assume he made it consciously.

JMTQp8lwXL(10000) 1 day ago [-]

Well-written React code shouldn't need to leverage the shouldComponentUpdate API. The application structure would be off, if that's the case.

nemothekid(10000) 1 day ago [-]

Virtual DOM is pure overhead*

* Compared to doing static analysis and optimizing your UI updates at build time.

While I certainly agree that svelte's approach may be the future, I think React and others, are very much a needed stepping stone (especially when you consider all the work done transpiling JS code).

The Virtual DOM was the most performant solution that applied generally to a large number of cases. The reason almost everyone did `x.innerHTML = html` is that it was the most general and widely available solution.

floatboth(10000) about 22 hours ago [-]

No, you don't need to do anything at build time! (You don't even have to have a build time.)

You can just... instantiate a template, remembering where the 'holes' are, to get precise update functions for every data field that gets inserted into the template. This is what lit-html does, and it's such an obvious approach I'm really surprised that VDOM took off before it.
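The approach can be sketched without any DOM at all (a toy simplification of the idea, not lit-html's actual implementation; all names here are invented): the static strings are fixed at instantiation, so an update only ever compares the dynamic 'hole' values between them.

```javascript
// Toy template instance: statics are processed once; update() returns the
// indices of holes whose values changed (the only spots needing a DOM write).
function instantiate(statics) {
  let prev = null;
  return {
    update(values) {
      const dirty = values
        .map((v, i) => (prev === null || v !== prev[i] ? i : -1))
        .filter((i) => i !== -1);
      prev = values.slice();
      return dirty; // statics are never re-examined
    },
    toString(values) {
      // interleave statics and hole values to produce the rendered output
      return statics.reduce(
        (out, s, i) => out + s + (i < values.length ? values[i] : ''), '');
    },
  };
}

const inst = instantiate(['<h1>Hello ', '</h1><p>', '</p>']);
inst.update(['World', 'hi']);  // first render: all holes dirty
inst.update(['World', 'bye']); // only hole 1 changed
```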


Lowkeyloki(4121) 1 day ago [-]

First, I think anyone using React solely because of the virtual DOM implementation is largely missing the point. IMHO, the real win of React is the functional and composable way components can be designed and implemented.

Second, no disrespect to Svelte, but I think there's a huge trade-off between the React approach and the Svelte approach that developers should be aware of. React is a pretty unopinionated library, all things considered. The only compilation step necessary is JSX to Javascript. JSX maps pretty directly to React's API. This means compilation is pretty simple. So much so that you can do it by hand really easily if you really wanted to. Svelte, on the other hand, is pretty compilation-heavy. There's a lot of what I'd consider to be non-trivial transformation going on between the code you pass to the Svelte compiler and what comes out of it and runs in the browser. Personally, I'm less comfortable with that compared to React's runtime library approach. But if you are comfortable with that trade-off, that's perfectly fine. It is worth being aware of it, though.

atan(10000) about 7 hours ago [-]

> There's a lot of what I'd consider to be non-trivial transformation going on between the code you pass to the Svelte compiler and what comes out of it and runs in the browser. Personally, I'm less comfortable with that compared to React's runtime library approach.

I initially had a similar concern, but so far, the opposite appears to be true. The Svelte compiled code is quite readable and easy to follow, and because there is no runtime, it's much easier to walk through exactly what is happening. With a complex runtime, it can sometimes be difficult to figure out why something isn't working as expected without having a deep understanding of the runtime codebase.

kabes(4096) 1 day ago [-]

The virtual DOM is an implementation decision made for performance, one a developer shouldn't even need to be very aware of. The main upside of React is that it has a huge ecosystem.

nailer(502) about 23 hours ago [-]

> Svelte, on the other hand, is pretty compilation-heavy. Personally, I'm less comfortable with that compared to React's runtime library approach.

Svelte compiles, React runs at runtime, that's true.

I've spent the last week (and weekend) doing the UI for a new project in Svelte. The compiler approach is pretty rad as it seems to catch more errors before I test them in browser.

You can download any project from the https://svelte.dev/ tutorial / online REPL and it'll have a rollup file, watching files, compiling them and telling about broken code.

vscode also has a plugin for Svelte components that shows pretty underlines while you work. The compiler approach means I see more warnings faster and save time.

namelosw(10000) 1 day ago [-]

It always bugs me when I'm using a framework with a custom HTML templating language (Angular, Vue, or possibly Svelte): it's never clear what the differences between them are.

It's almost a new language but similar every time, with different pitfalls -- an ad-hoc, informally-specified, bug-ridden, sometimes slow implementation of half of HTML and half of JavaScript.

For example, framework Foo does not have the concept of 'else' at all in its HTML templates. Another framework, Bar, has an 'else' like <div bar:else='expr' />, but the scope of that else is totally different from another framework Baz, or from JavaScript itself.

JSX on the other hand, is straightforward -- when you open a curly bracket, it's just JavaScript expressions -- map, condition, lexical closure, everything works out of the box.

dahfizz(10000) about 19 hours ago [-]

> There's a lot of what I'd consider to be non-trivial transformation going on between the code you pass to the Svelte compiler and what comes out of it and runs in the browser. Personally, I'm less comfortable with that...

How is this different than the 'non-trivial' transformations that V8 makes to actually compile and run your code? Does svelte do unpredictable / unexpected things? Don't you make runtime calls to the react lib where they can do whatever they want? I'm genuinely confused.

I don't care one way or the other - I'm not a web dev. It seems from this comment that you're just scared of compilers, which is strange. No matter what you're relying on third party libs in your code. Why is it somehow safer for that third party code to be used at run time rather than compile time? I would probably argue the opposite. Why the strong aversion to compilers?

adzm(4070) 1 day ago [-]

Note that even jsx is not technically required, and on occasion I've clenched my teeth and written non-jsx react code for some one-off demos.

jordache(10000) 1 day ago [-]

>functional and composable way components can be designed and implemented.

Ughh.. that's the point of all modern FE frameworks...

You are putting that description on a pedestal as if that is a unique property of React.

fouc(3980) 1 day ago [-]

Isn't it possible to skip the compile step in react, by using hyperscript instead of JSX?

revskill(3894) 1 day ago [-]

To me, React is more about Developer Experience. The ease of reasoning about the app is far more important than anything else.

The problem with Svelte is that, so far, it's more words than action.

You can learn from the React documentation. Instead of throwing a bunch of concepts in the developer's face, it brings out the WHY of React with real code.

Teaching users right from the documentation is the best way to introduce a library/framework, so users can really experience the tech.

rahimnathwani(2702) 1 day ago [-]

If you like the React documentation with real code examples, you'll love the Svelte tutorial with both code examples and a live playground. The UI is beautiful, too:


The examples are also very clear:


thoman23(10000) 1 day ago [-]

2 things I'm not seeing in the article or in the comments so far:

1) The virtual DOM is an abstraction that allows rendering to multiple view implementations. The virtual DOM can be rendered to the browser, native phone UI, or to the desktop.

2) The virtual DOM can, and should, be built with immutable objects which enables very quick reference checks during the change detection cycle.
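Point 2 can be illustrated in a few lines (plain JS, no framework assumed): immutable updates produce new objects and share untouched subtrees, so 'did this subtree change?' collapses to a single reference comparison.

```javascript
// Immutable update: replace what changed, share what didn't.
const state1 = { header: { title: 'Hi' }, items: [1, 2] };
// only `items` changes; `header` is shared by reference
const state2 = { ...state1, items: [...state1.items, 3] };

console.log(state1.header === state2.header); // true  -> skip this subtree
console.log(state1.items === state2.items);   // false -> re-render items
```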

spankalee(3051) 1 day ago [-]

There are other ways to represent a UI as data that don't require a diff. JSX's default compiler output throws away information needed to do efficient updates, and instead requires diffing the entire old and new trees.

Immutable objects may optimize for checking for data changes, but only if you do that, as in shouldComponentUpdate or checking inside render(). They don't optimize the _diff_, which is done against the DOM.

chrismorgan(3410) about 22 hours ago [-]

1: there's no reason at all why VDOM should be the abstraction over the multiple view implementations; there's no need: it's all duck typed, so make DOM (or at least the subset of it that Svelte will generate) the abstraction that other things must implement. I believe this is how Svelte Native works.

Furthermore, as a compiler, Svelte is well placed to render to multiple implementations, efficiently—though implementing it is likely to take more effort if you're dealing with a different-shaped API. This is demonstrated by the fact that Svelte has two compilation targets at present. First, dom, which is designed for client-side DOM operation; and secondly, ssr, server-side rendering, which is based on emitting strings, and never constructs a DOM.

2: even if you can do things that way, you're still doing more work than is necessary, because you're calling the whole render method and performing even simple comparisons that just aren't necessary. VDOMs render methods are allocation-heavy, because they deliberately create new objects all over the place. In the words of the article, which I assert does deal with this, albeit obliquely: virtual DOM is pure overhead.

kgwxd(2857) 1 day ago [-]

On point #2, ClojureScript not only provides immutability out of the box, but also has libraries for replacing JSX with the same stuff everything else is built with. It's an insanely beautiful way to work with React.

Historical Discussions: Facebook has struggled to hire talent since the Cambridge Analytica scandal (May 16, 2019: 630 points)

(631) Facebook has struggled to hire talent since the Cambridge Analytica scandal

631 points 4 days ago by Despegar in 3314th position

www.cnbc.com | Estimated reading time – 6 minutes | comments | anchor

Facebook is still reeling from the fallout from its Cambridge Analytica scandal more than a year ago, as multiple former recruiters say candidates are turning down job offers from what was once considered the best place to work in the United States.

More than half a dozen recruiters who left Facebook in recent months told CNBC that the tech company experienced a significant decrease in job offer acceptance rates after the March 2018 Cambridge Analytica scandal, in which a data firm improperly accessed the data of 87 million Facebook users and used it to target ads for Donald Trump in the 2016 presidential election.

This impact to Facebook's recruiting efforts is important as the company adds thousands of employees each year. These new workers are key to the company's ability to innovate and improve its existing products. The company faces cutthroat competition for talent from other top tech companies like Google, Apple, Amazon, Microsoft and countless start-ups.

Most notably, Facebook saw a sharp increase in students at top universities who are declining the company's job offers.

Among top schools, such as Stanford, Carnegie Mellon and Ivy League universities, Facebook's acceptance rate for full-time positions offered to new graduates has fallen from an average of 85% for the 2017-2018 school year to between 35% and 55% as of December, according to former Facebook recruiters. The biggest decline came from Carnegie Mellon University, where the acceptance rate for new recruits dropped to 35%.

This drop has been echoed elsewhere in the company's recruiting efforts.

The company has seen a decline in acceptance rates among software engineer candidates for its product teams. Those teams have seen their acceptance rates fall from nearly 90% in late 2016 to almost 50% in early 2019, according to one recruiter who left recently.

Facebook spokesperson Anthony Harrison said the company's head count grew 36% year over year from the first quarter of 2018 to the first quarter of 2019. Facebook disputed the accuracy of the recruiters' accounts, but declined to point out any specific points that were wrong.

After the publication of this story, Harrison contacted CNBC to say "these numbers are totally wrong."

"Facebook regularly ranks high on industry lists of most attractive employers," Harrison said in a statement. "For example, in the last year we were rated as #1 on Indeed's Top Rated Workplaces, #2 on LinkedIn's Top Companies, and #7 on Glassdoor's Best Places to Work. Our annual intern survey showed exceptionally strong sentiment and intent to return and we continue to see strong acceptance rates across University Recruiting."

In general, Facebook candidates are asking much tougher questions about the company's approach to privacy, according to multiple former recruiters.

This decline has put pressure on Facebook recruiters to fill a backlog of open positions under more difficult pressures than they've faced previously, the recruiters said.

"Usually half of the close is done for recruiters with the brand Facebook has," one recruiter who left in 2019 said. "This is the first time a lot of our folks have had to be on top of their game to make sure top candidates don't slip through the cracks."

This drop in candidate interest follows other signs that show more tech workers are reconsidering working for Facebook. In December, former Facebook employees told CNBC that they were hearing from more current Facebook employees who were reaching out to ask about job opportunities elsewhere. And in April, executives at health-tech start-ups told CNBC that poaching employees from Facebook has become easier after the company's scandals.

The scandals have impacted candidate interest and have hurt morale among recruiters.

"The biggest thing that impacted people at Facebook is that we found out information at the same time as the general public did," one recruiter who left recently said. "It was like, 'Wait, shouldn't one of our leaders have told us about this first versus our parents or friends reaching out?' It was a shock."

Facebook has lost candidates to other top companies like Google, Microsoft and Amazon, which can offer salaries and signing bonuses as good as Facebook's with much less scandal, recruiters said.

Facebook has also lost candidates to start-ups that are gearing up to go public or have just gone public, such as Airbnb, Slack, Uber and Lyft, the former recruiters said. Several candidates have also opted for high-potential start-ups, including Robinhood and Stripe.

The company's scandals are not the only reason for Facebook's new recruiting struggles, the recruiters said. There are a number of other reasons, including the skyrocketing cost of living in the Bay Area and the cutthroat competition for talent among top tech companies.

Some candidates have said they're not interested due to concerns about the general culture of the company as well as its leadership. Others still say they don't want to be involved with the company that was responsible for electing President Donald Trump or that they don't want to work for a company that has Peter Thiel, a notable investor and staunch Trump supporter, on its board.

Other candidates, especially students, are also passing on Facebook's offers because they simply don't use the company's apps as much as previous generations and are therefore less inspired to work for the company.

"The privacy scandals, the Cambridge Analytica stuff — students aren't as interested in going to Facebook anymore," a former Facebook university recruiter said.

WATCH: Here's how to see which apps have access to your Facebook data — and cut them off

All Comments: [-] | anchor

zippzom(4114) 3 days ago [-]

Personal anecdote: I had a job offer from Facebook and a couple other big tech companies. The Facebook offer was substantially better fiscally than the other ones and it was clear to me that they were having trouble hiring. Their initial equity grant has no cliff and the signing bonus was massive for somebody two years out of school: $75,000 cash in first paycheck.

However I ultimately turned it down because of ethical concerns about working there combined with a sense that people would not approve of my job choice. I.e. even if I don't find what they're doing ethically questionable (and I do, although I don't think they're so bad), I didn't want to have to explain myself or defend them to everybody when I mentioned where I worked. Just my two cents as somebody who was one of the 50% of candidates who turned down the job.

mpeg(10000) 3 days ago [-]

Kudos for sticking up for what you believe in, but if I was you I would have taken the job, that kind of money that early in your career can shape your future massively.

I'm fairly senior and I would still do some really unethical stuff for a $75k signing bonus.

Panini_Jones(10000) 3 days ago [-]

> Their initial equity grant has no cliff and the signing bonus was massive for somebody two years out of school: $75,000 cash in first paycheck.

- Google got rid of the cliff too.
- The $75K sign-on bonus is nothing new.

These are not signals that we're having trouble hiring now.

> However I ultimately turned it down because of ethical concerns about working there combined with a sense that people would not approve of my job choice. I.e. even if I don't find what they're doing ethically questionable (and I do, although I don't think they're so bad), I didn't want to have to explain myself or defend them to everybody when I mentioned where I worked. Just my two cents as somebody who was one of the 50% of candidates who turned down the job.

Honestly, working at FB as a SWE is awesome. Like beyond awesome. If impressing other people is what you're optimizing for, you do you, but just know that you're missing out big time.

azangru(10000) 3 days ago [-]

An anecdotal piece of evidence.

There's this senior engineer, by name of Jafar Husain. Used to work at Microsoft, where he learned RxJS (from Matt Podwysocki, I shouldn't wonder). Then he moved to Netflix and brought his RxJS know-how there. Worked there as a senior software engineer for about 7 years. Authored the Falcor library (a graphql competitor in the early days of graphql). Then, in October 2018, he joined Facebook.

And he is a very talented engineer :-)

throwaway_9168(10000) 3 days ago [-]

So he worked at MSFT till ~2011 (before they became open source friendly), and joined Netflix ~2011 (when they announced Qwikster [1]) and joined Facebook just a short while after the Cambridge Analytica scandal broke out? He may be talented, but boy does he have to work on his timing.

[1] https://theoatmeal.com/comics/netflix

allthecybers(10000) 4 days ago [-]

This is not a surprising headline. If you have values about privacy, decency, civil discourse, honesty or integrity you wouldn't want to work there. Also, if you feel the company was collusive or willingly complicit in the dissemination of fake news and Russian propaganda efforts during our elections, it'd be a big fat "no" to working there. And it's not just our democracy that is undermined by FB. There's a litany of abuses that they have either been horribly naive to or downright negligent in addressing.

If you are bright-eyed optimistic about Facebook I'd be interested to hear your counterpoint to all of the scandal. I don't think there is any company in the FAANG that is an altruistic enterprise but it isn't surprising that FB would have a decline in hiring.

nostrademons(1654) 3 days ago [-]

> I don't think there is any company in the FAANG that is an altruistic enterprise

I feel like Google started that way, and then lost its way sometime between 2009-2012.

Projects like Google Scholar, Google Books, Google Summer of Code, Google Reader, Google Open Source, Google.org, and pulling out of China didn't really have much of a business justification, but were simply something good that they could do. Unfortunately they're a public company, and when you start struggling to meet analysts' (perpetually inflating) estimates, being good - or at least not evil - is usually the first thing on the chopping block.

merpnderp(10000) 3 days ago [-]

Forget the election, just what social networking is doing to young people's minds. They're making money by making a lot of people miserable - just not how I'd want to make a living.

Farradfahren(10000) 3 days ago [-]

The reason why Facebook is no longer interesting as a place to get hired is that they are already there. They are the Microsoft of social communication. If you want to push new ideas in social media, you start a company, and the company gets bought by Facebook if successful. Those that float around in that sort of mothership are usually bureaucrats and legacy maintainers, plus a healthy dose of suits who don't care what sort of business they actually are in, as long as the numbers grow. Why in the world would one want to go there?

seisvelas(4103) 3 days ago [-]

I have no money and a very shitty laptop, and thanks to Google Colab's free, hosted Jupyter Notebooks I'm having a blast learning Keras.

I'm not saying they're saints, but they've given me something free that's improved my life. Maybe it's ultimately greedy in the sense that later if I need a cloud platform I'll definitely use GCP. But I think that kind of mutualism is actually better in practice than altruism.

djohnston(10000) 3 days ago [-]

It's surprising because it's totally false. Read the above comments.

astazangasta(10000) 3 days ago [-]

>Also, if you feel the company was collusive or willingly complicit in the dissemination of fake news and Russian propaganda efforts during our elections, it'd be a big fat "no" to working there. And it's not just our democracy that is undermined by FB

Come on. According to FB the IRA had 80000 posts over a two year period. In the same period there were 33 trillion FB posts. What moron still believes this garbage?

FB was hung out to dry by Congressional democrats too spineless to own up to their own pathetic failure to defeat Trump.

neves(4131) 3 days ago [-]

Don't forget about WhatsApp. It was the main channel for the dissemination of fake news in the Brazilian election. Now we have a global warming denier in the presidency and Amazon deforestation is reaching record levels.

Sure, there isn't any company in FAANG that is an altruistic enterprise, but the only purely evil one is Facebook.

What really impresses me is that there's still a lot of talented people working there.

mtgx(144) 3 days ago [-]

Forget Cambridge Analytica. There have been several reports about how badly they've been treating their executives. Why would anyone else trust them after those reports?

decebalus1(10000) 3 days ago [-]

do you think the rank-and-file employees give a damn about how they're treating the executives?

bogomipz(2438) 3 days ago [-]

I can't help but think of Zuckerberg's famous quote 'young people are just smarter.'[1]

I guess they are Mark.

[1] https://www.forbes.com/sites/stevenkotler/2015/02/14/is-sili...

imsofuture(4085) 3 days ago [-]

The older I get, the funnier that quote gets.

uerobert(10000) 3 days ago [-]

Correction: Facebook has struggled to hire talent [from top universities] since the Cambridge Analytica scandal.

akhilcacharya(4105) 3 days ago [-]

Everyone knows if you didn't go to a top school you don't matter /s.

Anecdotally, nobody who went to my undergrad got a new-grad offer from FB and then declined it, because most of them just can't get offers there, and if they do, they can't get equivalent ones.

Glyptodon(4011) 3 days ago [-]

My career is and has always been defined by a refusal to work for companies I'm not ethically comfortable with. Hopefully it becomes more of a norm.

wnmurphy(10000) 3 days ago [-]

This is a luxury and a privilege many people can't afford, and I am right there with you in exercising it consciously.

jjuhl(1081) 3 days ago [-]

The sooner FB just dries up and dies, the better (IMHO). What do we need that junk for?

praneshp(2806) 3 days ago [-]

Seconding asciident's comment here: I can count at least 3 different things that had a positive effect on me, and 0 negative ones, in the last week.

I do agree that you probably don't need this junk.

asciident(3981) 3 days ago [-]

I get a lot of value from Facebook, and it greatly improves my life, even just this past week. I found out about some career opportunities through friends' posts. I saw photos of old friends that made me happy. I got answers to questions about a specific uncommon musical instrument I have via a Facebook group. I searched for people who would be in a city I was visiting, to message them to meet up while I was there. I learned news about acquaintances having kids or moving between cities. I read some backstory about a city hall decision that made me more informed about what particular politicians have been focused on and why. I bet I'm in the majority actually, and people who get nothing from it are probably already not using it.

jfasi(3856) 3 days ago [-]

I just spent three months hiring in NYC, and now that I think about it, I haven't seen a single person mention they were considering counteroffers from Facebook. For context, Facebook and Google are the two largest tech companies with a significant NYC presence. It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook.

> Usually half of the close is done for recruiters with the brand Facebook has

I'm also finding that company brand plays a huge role in closing candidates. Our company's brand is generally pretty strong, and I've found one of the things candidates respond to most is the story we tell about our company's past, present, and future. Facebook's story has become 'we were founded by a jerk who didn't care about privacy, our not caring about privacy has had massive consequences for American and global society, and our promises to improve our approach to privacy in the future have proven to be disingenuous smokescreens.'

It's no wonder the substantial portion of people who care about their employer's ethics are turned off.

elgenie(4124) 3 days ago [-]

IIRC, Google places a much higher emphasis on making counteroffers in the first place, as well as making those counteroffers hard to refuse.

tanilama(10000) 3 days ago [-]

It is not only ethics.

With SO much negative press, I feel that Facebook has lost its mission among the wider public. If it is a net bad for society, even just in perception, it is hard to hire people who share your vision; you get only mercenaries.

Good people are weird, though. They work for money, like everyone else, but not just money.

clairity(4051) 3 days ago [-]

> 'It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook.'

interesting anecdote. google is a bigger concern for privacy and personal liberty, yet jobseekers are shunning facebook because of the more wide-ranging negative press.

bradlys(10000) 3 days ago [-]

And, yet, here in the bay - my company (a startup) sent out two offers to candidates quite recently and they both went to FB instead.

There is no shortage of people joining FB because there's no shortage of people wanting to join a big company. Maybe if they're all comparing offers between big companies then they'll join some other big co but if the difference is startup vs Facebook... FB wins.

amelius(883) 3 days ago [-]

> It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook.

Perhaps they were ashamed to admit it (?)

telltruth(4130) 3 days ago [-]

> It's no wonder the substantial portion of people who care about their employer's ethics are turned off

Nope. The people I know who turned down FB offers did so purely because they see it as a less stable company and worry that its stock will keep falling. No one wants to wake up a month later to find out that their signing bonus just got reduced by 10% due to a bad news cycle. I would estimate that less than 10% of people turn down an employer over privacy-related ethics. Also, as a side note, FB has jacked up stock bonuses for existing employees. Their attrition rate is virtually unaffected despite all the bad news.

filoleg(10000) 3 days ago [-]

Anecdotal, but in the past year, I had tons of recruiters from Google/Amazon/etc. knocking on my LinkedIn box. However, not a single one from Facebook. Maybe they just simply didn't fund recruiting efforts as much as the other tech companies or weren't hiring as aggressively.

stubish(4121) 3 days ago [-]

There was a time when recruiters would put on a sheepish and embarrassed-to-bring-it-up look when mentioning the higher paying jobs they had for tobacco companies. Paid more, but few wanted the social stigma, even if their personal ethics were OK with it.

sizzle(688) 3 days ago [-]

In my experience, Facebook used to be a cool thing to be on when you were documenting college party shenanigans and sharing pictures with friends, before it reached mass adoption to the point that your parents/grandparents were trying to add you as a friend. This was a time when organizing/sharing pictures with friends digitally was not a straightforward process.

I've come to terms with a simple fact of life that after graduating, it gets harder to make friends as you get older and start to settle down away from your college towns. Most of the acquaintances I've added on Facebook might as well not exist as we don't talk offline and my core circle of friends communicate over imessage/sms or various chat apps and we try to make time to see each other, further cementing our friendships offline.

Another thing that bothers me about Facebook, since I first joined around the time a .edu email address was required (I think?), is that every time I visit the site, the new interface and feature bloat makes it feel less and less like what made it dead simple to connect with people back in earlier times. The current experience for me consists of a noisy ad-infested newsfeed, ultra-optimized to inject itself straight into your brain's reward center with statistically significant A/B tested precision and autoplaying clickbait media nonsense, all while functioning as an echo chamber for long-lost acquaintances' political outrage spam.

I wonder if people from my age cohort feel similar cognitive dissonance, and that's why Facebook isn't even on their minds career-wise, because it's like an ancient digital museum that houses dusty pictures from their younger years and has long been replaced by Instagram.

Anyone out there relate?

Balgair(2628) 3 days ago [-]

There is also an issue with the 'evaporative' effect. If no one who works there is seen as 'ethical', then you'd expect the people that do work there to be unethical/dubious. So trying to get a promotion is more cut-throat, the lunch crew has a few more 'jerks', the HR is a bit more biting, etc. Your hackles get raised and you are more suspicious of the motivations (however benign) of others. Better to just not get involved.

krageon(10000) 3 days ago [-]

> one of the things candidates respond to most is the story we tell about our company's past, present, and future

I hear this storyline fairly often (though exclusively from corporate recruiters) and I have a super hard time understanding why this would matter. Can someone who actually listens to this kind of (IMO) propaganda weigh in and help me understand why it matters to them?

zjaffee(10000) 3 days ago [-]

I agree with everything you've said until the last part. Google is only marginally better than fb when it comes to some of these issues of privacy. The issue people have with facebook is that it has a reputation for being a pressure cooker.

algaeontoast(10000) 4 days ago [-]

I actively turned down a facebook offer out of college, however, boy does facebook throw money at people who seem to turn a blind eye to online ethics...


Kinnard(3105) 4 days ago [-]

Is it possible that ppl who work at facebook have ethics that merely differ from yours?

5trokerac3(4006) 3 days ago [-]

Facebook's brand is tainted enough now that smart engineers don't want any of the bleedover into their personal brand that would come from working there.

How many engineers, in hiring positions, do you know that have a positive opinion of FB?

malvosenior(2859) 3 days ago [-]

I would hire an engineer from Facebook any day of the week. For the most part the devs are excellent. I would not let my personal opinion of a product stop me from hiring a great candidate.

ma2rten(2754) 3 days ago [-]

Yes, I actually have an overall positive opinion of Facebook in terms of engineering talent. Facebook has made some questionable decisions, but I would not automatically assume that everyone who works there is so immoral that you shouldn't hire them.

Facebook AI research also has some of the top AI researchers.

malandrew(2866) 3 days ago [-]

I know lots of engineers, in hiring positions, that have a positive opinion of FB.

They still do great engineering and someone working there will learn a lot and take those skills to their next job.

camjohnson26(4129) 3 days ago [-]

Ok but how many engineers use React, yarn, or graphql? Facebook is still leading the way in front end development and it's still a net positive to have that name on your resume. Their brand isn't any more tainted than Google, Microsoft, or Amazon.

hasbroslasher(10000) 3 days ago [-]

I think your point about the personal aspect of work is most relevant. I personally would be mortified to work at a company that was always in the news for various scandals and generally being full of shit. Not that I'd assume anyone who works there to be full of shit - just that I wouldn't want my friends making fun of me for working for the Zuck. Facebook just isn't cool anymore in my neck of the woods and there's no social capital in using it or working there as far as I can see.

freetime2(10000) 3 days ago [-]

It's not easy to build an app at the scale of FB. If I saw FB experience on a candidate's resume, I would view that as a positive.

skybrian(1785) 3 days ago [-]

If there will be fewer people who go to work at Facebook who care about privacy, that seems like bad news?

JohnFen(10000) 3 days ago [-]

I don't think that people who care about privacy could possibly affect what Facebook does by working there.

redwards510(4131) 3 days ago [-]

As much as everyone wants to believe this is because all the applicants are suddenly taking strong ethical stances, I bet it has more to do with Facebook simply not being considered cool or exciting anymore.

drugme(2940) 3 days ago [-]

And the root cause of its suddenly 'not being considered cool or exciting anymore' would be?

paxys(10000) 3 days ago [-]

Sure, but one of the biggest reasons it isn't considered cool or exciting anymore is all the negative press.

rhizome(4094) 3 days ago [-]

I think this story is submarine PR paid for by Facebook to garner sympathy.

return1(4131) 3 days ago [-]

Its ML research is exciting. I would like to work with Yann Lecun

trailingZeroes(10000) 3 days ago [-]

Agreed. I would argue that Facebook is not considered cool as a direct result of all the outrage surrounding it.

heyyyouu(4045) 3 days ago [-]

Yeah, it's seen as the platform your parents (or worse, grandparents) use. Pretty much a step above Next Door. Why would you want to work for that over some of the other companies out there?

HillaryBriss(1954) 3 days ago [-]

As a business, this is not a big problem for FB. FB will still find plenty of talent.

What they will find less abundant is 'top talent,' whatever that means. I doubt FB actually needs that much 'top talent' to continue successfully, anyway. They're not performing brain surgery on a rocket.

qmanjamz(10000) 3 days ago [-]

I agree. Amazon has been hiring mediocre talent for the last two decades and it hasn't slowed their growth at all.

nabla9(657) 3 days ago [-]

They have to pay more. Median monthly pay for intern: $8,000


what_ever(4129) 3 days ago [-]

That's been the common intern pay for a while at FAANG companies. Heck I got paid more than $8k at a non-FAANG company back in 2012.

natrik(10000) 3 days ago [-]

Facebook has begun scaling back on hiring since the Cambridge Analytica scandal.

Alternative possible headline working just as well. People are still applying en masse to work at Facebook.

Among top schools, Facebook's acceptance rate for full-time positions offered to new graduates has fallen from an average of 85% for the 2017-2018 school year to between 35% and 55% as of December.

A fall in acceptance rates may mean saturation in needed roles at Facebook.

JohnFen(10000) 3 days ago [-]

> A fall in acceptance rates may mean saturation in needed roles at Facebook.

No. If Facebook didn't need those people, they wouldn't have extended an offer to them in the first place, so it wouldn't have affected acceptance rates.

lallysingh(4001) 3 days ago [-]

Then Facebook shouldn't be giving out offers or interviewing so many people?

I think these are primarily offers given to interns. Otherwise, why go through the interview process if you don't want to work there? It may be a better place to go for an internship than for a full-time job.

Then again, their compensation packages may no longer be competitive. It's possible.

esoterica(10000) 3 days ago [-]

I think you're misinterpreting the term acceptance rate. It's the number of offers extended that were accepted, not the number of applicants who were extended offers.

valleyjo(10000) 3 days ago [-]

Yea, that's a dismal stat. Roughly, it means smart people don't want to work for you.

dymk(10000) 3 days ago [-]

> After the publication of this story, Harrison contacted CNBC to say "these numbers are totally wrong."

> "Facebook regularly ranks high on industry lists of most attractive employers," Harrison said in a statement. "For example, in the last year we were rated as #1 on Indeed's Top Rated Workplaces, #2 on LinkedIn's Top Companies, and #7 on Glassdoor's Best Places to Work. Our annual intern survey showed exceptionally strong sentiment and intent to return and we continue to see strong acceptance rates across University Recruiting."

Perhaps it's best not to take a couple ex-recruiters word as blanket truth about company wide trends.

Of course, the article simply mentions this then goes straight back to asserting company wide morale problems, which is an interesting narrative to pursue, when that's not really what the majority of employees are feeling (which is further reflected by strong hiring numbers and low engineer attrition).

JohnFen(10000) 3 days ago [-]

Also, every one of the metrics he cites there are of dubious value in terms of real-world meaning.

decebalus1(10000) 3 days ago [-]

Hmm... the headline is in direct contradiction with what I read on Blind. People are still flocking at Facebook's gate for an offer.

mrep(4033) 3 days ago [-]

Ha, the first thing I thought of when I saw a 'Facebook struggling to hire' post was Blind. If any one reason is causing people not to interview with Facebook, I would argue it was the weekly burnout thread that was happening on Blind last year.

throwaway55554(10000) 3 days ago [-]

Ok, but Blind is filled with the type of people who would fit right in at FB.

mrkstu(10000) 3 days ago [-]

The difference being, the very best, with multiple offers, are choosing another path.

It's like the very best high school students with offers from Harvard, Princeton and Yale: suddenly Facebook is being treated as if it's Dartmouth or Cornell instead of Princeton, left with what remains after the very best have chosen instead of having its pick. Still very talented people, to be sure, but not the 'best.'

karthikb(10000) 3 days ago [-]

I have seen cold emails from Facebook and Instagram recruiters recently and they all start on the defensive about privacy, how it's 'Zuck's' big thing and how he's taking it seriously. Seems a little desperate.

dylan604(4115) 3 days ago [-]

Does anyone actually believe Zuck's new found interest in privacy? His entire company is built on the sharing of data (even in ways users don't understand).

_hardwaregeek(10000) 3 days ago [-]

As a current student, I'm actually surprised by this. Maybe I just hang out with evil people, but I don't get the impression most young programmers care that much about ethics. Or they claim they do, but then the 6 figure salary, cushy benefits and signing bonus wins them over. Perhaps there's other reasons?

I do joke with my friend who works at Bloomberg that the 'evil' finance view has now flipped completely. Bloomberg is a pretty ethical company compared to Facebook, Google, etc.

bendoernberg(3665) 3 days ago [-]

Why do you hang out with people who don't care about ethics?

dilyevsky(10000) 3 days ago [-]

It's not just ethics: they have shit wlb and a cut-throat culture. Top it off with mountains of tech debt due to the above and brain-dead hiring practices, and it's not that surprising they have trouble hiring despite huge compensation.

meesles(4067) 3 days ago [-]

What you think you want while in classes and the shifting realities once you enter the workforce can be pretty night-and-day.

Personally I would have entertained the idea at working at a 'Big 4'-type company out of school knowing that they were ethically opposed to me. I guess mostly because the name on a resume is worth multiple other jobs in some scenarios and gives you an advantage over your peers.

Just a few years later, a 6-figure salary doesn't seem that outrageous and unique to those big corporations. Now that the difference is only in the $10s of thousands, these decisions become a little more nuanced and uncertain. Besides that, once you hit the high 5-figures, money becomes less and less of a driver in your life unless you are in desperate need due to your circumstances.

My point is that I think fresh graduates will put aside their ethical qualms because they don't yet know their worth and place in the workforce. That can change pretty quick.

That being said, plenty of people just don't care, and that's why these companies still have thousands of developers. It's easy to be blissfully ignorant of your contributions to privacy degeneration and corporate takeovers of our lives when 'all you do' is write some React components or speed up some data pipelines. The executives and managers are the ones who will really need to reckon with their consciences, knowing they implemented all these nasty programs.

Yizahi(10000) 3 days ago [-]

In my average-size telecom company almost nobody cares about privacy, or at least it seems so any time I mention anything related or link an article in the chat. Or at least they don't see an issue there. (Same with equality problems, but that's a different topic.) It seems programmers are just an average sample of the population.

gspetr(10000) 3 days ago [-]

Age is a proxy for putting value into such concepts as 'ethics'. When you're more or less senior and have your basic needs met, then you can afford to be picky on what you work. Fresh grads might not care because it takes to know evil to know good (see the story of the original sin and Tree of the knowledge of good and evil), and work experience is like a separate life experience.

Heck, I've heard a theory that you should only be counting programming years as life years, i.e. if they haven't been programming for 18 years, then they aren't adults in the world of software. And the funny thing is, once you're at least a teenager by this definition, then you start thinking that they might actually be onto something...

michaelgrosner2(10000) 3 days ago [-]

I work in finance (a trading firm, not a bank). In a way it's freeing to admit to yourself and to your coworkers that all you really care about is making money. No one's deluded thinking they're changing the world by selling options like FAANGs think they are by harvesting personal data.

It's not like we're making the world better but we're not actively harming it.

dontbenebby(3958) 3 days ago [-]

Ironically working for a bank is probably often more moral / does more good than Google. Keeping people's money safe is a very real service. People complain about CC interchange fees, but it costs ~1-2% to process cash as well. (Safes, hiring armored trucks to pick it up, etc).

Not all of finance is swapping debt like a commodity until the economy crashes and foreclosing on people who had some bad luck.

Allowing people to use their money conveniently and securely is arguably bringing more value to the world than helping run psyops to convince people to buy things they don't actually need. People need checking, savings, and credit card accounts and they need them to be secure and reliable

hn_throwaway_99(4016) 3 days ago [-]

> Or they claim they do, but then the 6 figure salary, cushy benefits and signing bonus wins them over.

But the issue is that top recruits can get a 6-figure salary, cushy benefits and a signing bonus from Netflix or Google or Amazon or AirBnB etc. etc. It's easy to at least pretend to be moral if you can shun Facebook as an employer without giving anything up to do so.

JohnFen(10000) 3 days ago [-]

> Or they claim they do, but then the 6 figure salary, cushy benefits and signing bonus wins them over.

Which means that they don't care about ethics.

warp_factor(10000) 3 days ago [-]

The general feel I get from most of my engineer friends is that Facebook's product is definitely not inspiring and doesn't 'make the world a better place' (whatever that means).

But the consensus is that given the right amount of money, they would all accept an offer from them. And Facebook is known to pay very well.

subparwheat(10000) 3 days ago [-]

It's pretty true that pay trumps all. But Google is generally pretty good at matching Facebook's offers. So when choosing between FB and Google, or any of the FAANG, pay is usually less of a concern. Being considered 'not inspiring' and 'evil' is definitely not helping FB close candidates.

varelse(10000) 3 days ago [-]

AMZN<FB<GOOG<Wall Street in my experience for AI skills. #OneDatapoint

ykhoury(10000) 3 days ago [-]

I deleted my Facebook profile 2 months ago. Best decision ever, but now I'm just wasting more time on Twitter haha.

silversconfused(10000) 3 days ago [-]

Why stop at facebook? Nuke twitter and join mastodon!

freedomben(2649) 3 days ago [-]

I've known several people that would no longer work for Facebook, but the Cambridge Analytica scandal isn't the biggest concern. It's the fact that they are censoring people, even within private groups.

I have a friend that jokingly said (in a private group) that men are vile pigs. We knew she was joking - it was good natured. Yet, Facebook issued her a warning and removed her post and threatened her with a ban. First they came for Alex Jones and I said nothing because I don't like Alex Jones (and think he's insane), but now that the precedent is set that Facebook is the speech police, it will expand to us all (especially with their machine learning advancements that are here and yet to come).

The EFF has a really important article about this that I implore everyone to read[1].

[1] https://www.eff.org/deeplinks/2018/01/private-censorship-not...

pertymcpert(10000) 3 days ago [-]

How did FB find out about her post?

fossuser(3835) 3 days ago [-]

For a detailed nuanced piece about how FB handles some of this complexity check this out: https://www.vanityfair.com/news/2019/02/men-are-scum-inside-...

FB has its problems, but I generally find the negative press overstated and wonder if Zuck's approach of engaging with the press and congress actually backfires (compare to the other companies, which largely ignore them). I appreciate how often he talks to the press to explain what they're trying to do, though.

I also see the Cambridge Analytica scandal as what it is - permissive APIs that were abused and then locked down. Cambridge Analytica is to blame in this for abusing TOS and behaving badly, FB is arguably negligent - but I think the reaction is extreme.

Plus from people I know inside FB there really is a huge funded effort to stop abuse and manipulation via 'integrity' teams. It'll be interesting to see how they modify things given Zuck's recent pivot towards focusing on privacy as a core feature.

wnmurphy(10000) 3 days ago [-]

They pay well, it's a good brand name to have on your resume, but on principle, I ignore any recruiter from Facebook.

It's glorified MySpace that exists to build de facto detailed psychological profiles on unsuspecting participants, and it's specifically engineered to manipulate their behavior. No thanks.

robertAngst(3933) 3 days ago [-]

Same feeling about the recruiters from Tesla.

It sounds horrible to work there, there are lots of companies that make cars.

bwasti(3675) 3 days ago [-]

I'm not sure how the journalist fact checked this, but in 2016 CMU sent 12 people to Facebook[1]. In 2018 CMU sent 27 people to Facebook[2].

[1] https://www.cmu.edu/career/documents/2016_one_pagers/scs/scs... [2] https://www.cmu.edu/career/documents/2018_one_pagers/scs/1-P...

wsetchell(10000) 3 days ago [-]

Or 29 if you include WhatsApp.

niceonedude(10000) 3 days ago [-]

Those are just SCS numbers. The CNBC article cites all of CMU. Facebook recruits from the math, engineering (EE), and info systems programs as well.

username90(10000) 3 days ago [-]

Those numbers are almost perfectly inline with the growth of Facebook, 17k employees in 2016 to 36k in 2018.


JohnFen(10000) 4 days ago [-]

Wow! Every so often, I see something that makes me feel hopeful about the future. This is one of those times.

malandrew(2866) 3 days ago [-]

I wouldn't get too optimistic. If the types of candidates that can make a positive social impact on the direction of the company don't join the company, that leaves Facebook with only those candidates joining that will continue to make it commercially successful without the positive social impact.

Basically, if you're unhappy with a Facebook with talented do-gooder employees, just wait until we've got a Facebook with only talented employees that don't care about do-gooding.

bloopernova(10000) 3 days ago [-]

I feel bad for the Facebook employees below middle management level, but I also really would like some harsh penalties directed at Zuckerberg et al.

I hate that huge corporations can just kind of shrug and say 'oops' to egregious crimes, without any meaningful consequences. Same with corporations not paying tax while still benefiting from the stable society created by those taxes.

gambler(3909) 3 days ago [-]

>Facebook candidates are asking much tougher questions about the company's approach to privacy, according to multiple former recruiters.

This narrative is highly suspicious.

Zuckerberg openly and repeatedly said that he doesn't care about anyone's privacy for well over a decade[1]. The whole company is built around collecting and selling private information. Why would people who care about privacy interview with Facebook in the first place?

[1] https://www.theguardian.com/technology/2010/jan/11/facebook-...

dredmorbius(230) 2 days ago [-]

I'd responded, years ago, to a recruiter's contact email with Zuck's 'dumb fucks' IM comment.

Their response was 'people change'.

Evidence is that this one hasn't. I stand by my response at the time.

mithr(10000) 3 days ago [-]

I think that over the past couple of years in particular, the real-world consequences of all of this have really come into the spotlight.

It's one thing to hear a tech CEO talk about something you may not agree with -- many people just categorize it as 'a Facebook thing' (as in, huh, maybe I'll try to use their products less) and move on with their day. It's quite another to come to the realization that non-trivial parts of (what many see as) seriously negative political consequences have come from these products and, being fully aware of these, the CEO/company still hasn't meaningfully acted.

And with all the recent publicity (there's a difference between being mentioned in the technology section of a paper, and giving a congressional testimony), pretty much no one can say anymore that they aren't aware of it, or haven't thought about it.

throwawaymath(3357) 3 days ago [-]

Mark Zuckerberg does not say he doesn't care about anyone's privacy in the article you cited 'openly and repeatedly' or otherwise. I suppose you can infer that from the actions of the company he runs, but your citation does not support what you've said here. Someone reading your comment without reading the entire Guardian article could come away with an incorrect impression of what he's publicly said.

simonh(10000) 3 days ago [-]

Maybe they're just more aware of what Zuckerberg is saying now than what he has said and done in the past.

quantumsequoia(10000) 3 days ago [-]

The whole 'Facebook sells data' thing is something I see said a lot, but I haven't seen any evidence presented. Is there any?

Wouldn't selling data be detrimental to their business model of being the golden goose for showing ads to people interested in your product? If people get user data, they can better target people themselves rather than paying Facebook to do it

pawelk(10000) 3 days ago [-]

> Why would people who care about privacy interview with Facebook in the first place?

I believe this may come from the responses to cold e-mails. A recruiter working for FB presents an offer; the candidate wants to tell them to GTFO, so they highlight the privacy concerns in the response as a way to say thanks, but no thanks.

At least that's what I do when a recruiter working for a company I find morally incompatible approaches me. I reply with something like 'The tech stack looks great and my professional experience aligns with what the job description requires. However I don't think I'd feel comfortable working for a <short-term loans | kids gambling | personal data mining> company, but I'm open to hear about similar positions in other areas if you had any in the future.'

seisvelas(4103) 3 days ago [-]

Perhaps developers now value privacy more on average than they did back then.

subparwheat(10000) 3 days ago [-]

I believe this is pretty true, cause I was interviewing with FB and I brought up some of those questions.

Before the scandal broke out, I didn't really know much about Zuckerberg's view on privacy. The scandal definitely raised my awareness of this topic.

But the question for me is less about FB's approach to privacy and more about how much Zuckerberg dictates the company. In other words, how much are FB employees empowered to do what's right, to fix their problems? Empowerment and autonomy are very important to tech talent. FB is not presenting itself too well in this respect.

bognition(3990) 4 days ago [-]

Honestly I think it's more than just that. Facebook is no longer the cool startup building the world's favorite website. They're a multi-national advertising mega-corp, and TBH most people just don't want to work there.

seppin(10000) 3 days ago [-]

Since 2014 yes

tobtoh(3797) 3 days ago [-]

But Apple, Google, Microsoft aren't the cool start-ups either and they are mega-corps, yet people still want to work there. So I don't believe that line of reasoning holds up.

lallysingh(4001) 3 days ago [-]

Is anyone?

ppeetteerr(4132) 3 days ago [-]

Totally agree. There are dozens of companies that aggregate user information (Google amongst them). Like @bognition said, FB lost its cool a while ago.

jarjoura(4033) 3 days ago [-]

FWIW, I think this past year everyone has been expecting Uber, Lyft, Airbnb and Pinterest to unlock a flood of money. Anecdotally, a researcher friend of mine turned down FB to do research at Snap Inc, and someone else's wife also turned down FB to work at Lyft, specifically to get in before the IPO and for no other real reason. So as much as it makes a good story about ethics or privacy concerns, I think it's lots of things, but probably very little about Cambridge Analytica.

robertAngst(3933) 3 days ago [-]

If US 'Defense' is any indicator, just pay more money.

Someone is willing to kill people for more money.

holoduke(10000) 3 days ago [-]

I believe (strong personal assumption) that this applies not only to Facebook, but also to Google and Apple. Three years ago, a lot of people in my network dreamed of working for Google or Facebook. Today that's no longer the case. They demand a company that is serious about things like privacy and the environment. Things like career possibilities are still important, but so many companies these days offer a similar work experience. Surprisingly, I hear more and more positive news coming from the old evil: Microsoft.

quantumsequoia(10000) 3 days ago [-]

Nearly every criticism of Google also applies to Microsoft. Google considered a censored search engine in China. Microsoft actually has one. Google had a contract with the military but didn't renew it. Microsoft actively has one. Microsoft products also collect large amounts of data on you, and provide less privacy than Google

I can't imagine anyone who no longer wants to work at Google wanting to work at Microsoft

pat2man(3791) 3 days ago [-]

Isn't Apple serious about privacy and the environment?

return1(4131) 3 days ago [-]

The self-righteous people should be alarmed by this: this means facebook will be hiring even worse scum and do more evil things. Save the world from evil facebook, go work for them.

50656E6973(10000) 3 days ago [-]

Let it die, this accelerates the process

Historical Discussions: South Korean government to switch to Linux: ministry (May 18, 2019: 593 points)

(604) South Korean government to switch to Linux: ministry

604 points 1 day ago by jrepinc in 786th position

www.koreaherald.com | Estimated reading time – 1 minutes | comments | anchor

The government will switch the operating system of its computers from Windows to Linux, the Ministry of the Interior and Safety said Thursday.

The Interior Ministry said the ministry will be test-running Linux on its PCs, and if no security issues arise, Linux systems will be introduced more widely within the government.

The decision comes amid concerns about the cost of continuing to maintain Windows, as Microsoft's free technical support for Windows 7 expires in January 2020.

The transition to Linux OS and the purchase of new PCs are expected to cost the government about 780 billion won ($655 million), the ministry said.

Before the government-wide adoption, the ministry said it would test if the system could be run on private networked devices without security risks and if compatibility could be achieved with existing websites and software which have been built to run on Windows.

The ministry's digital service bureau chief Choi Jang-hyuk said the ministry expects cost reductions through the introduction of the open-source OS and also hopes to avoid building reliance on a single operating system.

By Kim Arin ([email protected])

All Comments: [-] | anchor

superos(10000) 1 day ago [-]

I switched 20 years ago. What took them so long?

WilliamEdward(10000) 1 day ago [-]

uh you're just 1 person?

conrmahr(10000) 1 day ago [-]

maybe they should use Kali?

conrmahr(10000) about 17 hours ago [-]

tough crowd, just a joke.

AFascistWorld(10000) 1 day ago [-]

China has this 'original' OS called Redflag Linux; it's basically repackaged Linux to rake in government money. It sells to the military for some 500 USD per PC, so the kickback must be big.

chillacy(10000) 1 day ago [-]

And yet nothing is as common as pirated windows. Even one of the displays in an airport crashed and there was a bsod

thiago_fm(4124) 1 day ago [-]

I doubt this will work out. Many countries have tried this, and they all went back to Windows. I hate Windows, but I can envision how much money AND time it's gonna cost them to move, and I'm sure that they will either run out of money before they finish the transition, or machines will have dominated the earth before they are done with this 'transition'.

I would believe it much more if they said that they will try to use more Linux, try to mix them up, or create APIs for everything etc. Nowadays it's so easy to create services and use LDAP or whatever thing you want with whatever programming language or service. You can host it on Linux and start changing some computers and systems to Linux.

But 'switch from X to Y' seems a terrible idea. Even as a personal computer user, for me, it's impossible to 'switch'. Imagine for a government. It's a hilarious statement, one of those that politicians do say, but they have no fucking clue of what they are saying.

yjftsjthsd-h(10000) 1 day ago [-]

There's also the sheer political force that Microsoft brings to bear against Linux every time this kind of thing comes up.

jcelerier(3913) 1 day ago [-]

> They also tried that in many countries, they all go back to using Windows.

uhh... no, here in France a lot of public infrastructure runs on linux.

nisa(3925) 1 day ago [-]

Munich did it this way: 100% Linux, no compromises. They didn't even have a central Active Directory, so if you required Windows due to software in your part of the city government you were on your own: no central updates! They also forced everyone onto an ancient OpenOffice 3.x version with KDE 4.x and KMail. That Munich migrated back has less to do with Linux or Windows or Microsoft politics (maybe..) and more with badly run infrastructure, in-fighting, and big egos there. Sadly. That there is such a huge group of Linux proponents crying foul and suspecting a conspiracy is not really helping...

ducttape12(10000) 1 day ago [-]

I remember hearing years ago IE usage was crazy high in South Korea because they had laws in place requiring all online banking customers to install an ActiveX control. If that's true... Then they've come pretty far.

AFascistWorld(10000) 1 day ago [-]

Most Chinese banks still do, some even require you to install a service, some don't support Chrome.

hatsunearu(3837) about 18 hours ago [-]

OK so Korea has this weird system which says most banking/e-commerce transactions must be authenticated by a home-grown, country wide asymmetric crypto system. Basically to do commerce stuff online, you need to get a public key certificate that you can sign your online activity with from either the government or a banking institution.

The crypto nerds (including me) might think this is a splendid idea--perfect security!

The downside: they did this to shift the blame to the consumer if something goes wrong. If there is a credit card fraud, the consumer is automatically at fault by default, since obviously the consumer mishandled the public key crypto file. This makes sense, but this also means no chargebacks and other nasty side effects.

Also, this meant that you had to install a bunch of ActiveX verging-on-malware software to get your banking stuff settled. Now it supports Chrome and Firefox, but holy crap, they install a ton of weird malware-protection software and I'm pretty sure some of it takes a cut of your computer's performance and stability. I'm just glad they don't display fucking ads.

bootlooped(10000) 1 day ago [-]

Even after ActiveX was on the way out they still had some pretty wacky practices. Korean websites requiring users to install proprietary security software was not uncommon.

tooltalk(3841) 1 day ago [-]

> ... as Microsoft's free technical support for Windows 7 expires in January 2020.

they are still on Windows 7? ouch!!!

cvnyw(10000) 1 day ago [-]

Lighter and easier to manage than 10... why not?

samfisher83(2460) 1 day ago [-]

I lot of people prefer 7. I am not a fan of the forced upgrades or saas model. I don't want to upgrade which breaks this or that and you can't not upgrade unless you have the pro version.

I think a lot of people were fine with 7.

dictum(3469) 1 day ago [-]

Maybe I got older and life lost some color, but the difference between Windows 10 and Windows 7 doesn't feel like the difference between 7 and XP. Seems like most of the action (outside gaming and Office power users) has moved to browsers and browser wrappers.

maxheadroom(10000) 1 day ago [-]

>they are still on Windows 7? ouch!!!

If you think that's painful, wait until you find out about Windows XP[0]. :)

[0] - https://www.windowslatest.com/2018/04/04/windows-xp-is-still...

babakandishmand(10000) 1 day ago [-]

Curious if they'd roll out their own distro.

thewhitetulip(3370) 1 day ago [-]

That will be too much work. Why not use well maintained existing distros?

kazinator(3735) 1 day ago [-]

'Let's switch to something developed under the free world's values that we violently repress here, and whose originators we would likely have jailed had they started their projects here.'

wongmjane(10000) 1 day ago [-]

I think we are talking about South Korean government, not North Korea...

0xb100db1ade(3837) 1 day ago [-]

Did you mean to say that about North Korea, not South Korea?

oregontechninja(4127) 1 day ago [-]

At this rate, it really feels like the next version of Windows will be derived from some Unix OS. Windows 8 and onwards have been an unmitigated IT disaster. Windows Server is meh. I once ran Ubuntu MATE with a Windows theme on a terminal in a computer lab, and everybody clamored to use it since it was faster, better at printing, and never bluescreened. (That particular model was forced into 'obsolescence' by Windows 10.)

Just throw a custom graphical shell over a hardened Unix OS, then use Valve's Proton project for application compatibility. You could be so rich using other people's work, Microsoft.

la_barba(10000) 1 day ago [-]

Just as another anecdote to yours, I regularly get months of uptime with my Windows 10 dev box. It's been nothing but rock solid for me (as were 7 and 8 before it).

techntoke(4131) 1 day ago [-]

Agreed, yet their goal is to build their own crappy version so that people think Linux sucks, when in reality it is still MS. I blame their culture though. No one who appreciates quality open source wants to work there.

thewhitetulip(3370) 1 day ago [-]

I hope that Indian govt switches to Linux

_emacsomancer_(3558) 1 day ago [-]

I think the way forward here is getting more Linux support at lower levels, including the state-level. There has been some progress on this front, especially in Kerala, and also in Assam: https://en.wikipedia.org/wiki/Adoption_of_free_and_open-sour...

gcells(10000) 1 day ago [-]

Boss Linux developed by CDAC comes to mind https://www.bosslinux.in/

Although the penetration is quite low. The project looked stalled a few years back; seems like they are back again. The last release is dated 28.8.2018.

Yeri(10000) 1 day ago [-]

I used to support our office in S. Korea.

It's true they depend super heavily on ActiveX and other weird applets/software that runs only on Windows and/or IE.

A lot of the communication with the government (yearly tax submission) as well as banking software would not run on our Linux or OSX devices, and so we had these 'loaner' windows 'tax machines' that got reimaged every few days that people could use to do their business.

They also have their own Office-like suite by Hancom[1], which is a pain to support and only runs on Windows. There were attempts to get gDrive to read the files, but afaik that never worked out.

[1] https://en.wikipedia.org/wiki/Hancom

cbhl(3408) 1 day ago [-]

If I remember correctly, the ActiveX widget provided home-grown encryption, because in the 90s, 40-bit encryption was the most you could export from the US.


mycall(10000) 1 day ago [-]

Did they try WINE or ReactOS?

kijin(3961) 1 day ago [-]

The word processor by Hancom is actually pretty nice, especially for composing Korean-only and mixed Korean-and-English documents. I've been using it since before I even knew that MS Word existed, and I still prefer it to MS Word for anything that contains CJK characters. The proprietary file format is a PITA, though.

Hancom did release a Linux version of their office suite a few years ago, and regularly updates the OSX version. Moreover, Hancom has been dabbling in Linux for over 20 years. They even maintained their own Red Hat-based distro at some point. They also support OpenDocument and OOXML fairly well. In short, Hancom has been hedging their bets very thoroughly. So if the government adopts Linux by any chance, it won't be difficult for them to go fully cross-platform and keep their lucrative government contracts.

bachmeier(3750) 1 day ago [-]

It was actually worse than that when I lived in Korea. It wasn't just Windows that you needed, it was Korean Windows. I thought I'd be okay because I was running dual boot machines with Linux and Windows. Nope. Windows purchased in the US simply did not work with many of the sites. (My understanding is that most software written for commerce/government sites had Korean people that purchased a Windows computer in Korea as the only audience they thought about.)

jmkni(4009) 1 day ago [-]

I guess running those ActiveX controls under Wine is a non starter?

slyu(10000) 1 day ago [-]

I wonder if this is something that can be accomplished at scale, particularly thinking in the context of maintaining upgrade readiness and update compliance for all the devices.

techntoke(4131) 1 day ago [-]

Absolutely, it is very easy to automate too. Arch has a great package manager BTW.

jacquesm(43) 1 day ago [-]

Cue a concerted effort by MS salespeople to make them an offer they can't refuse. See also: Munich.

maximus1983(10000) 1 day ago [-]

That is a flat out lie.


> Hübner said the city has struggled with LiMux adoption. 'Users were unhappy and software essential for the public sector is mostly only available for Windows,' she said.

> She estimated about half of the 800 or so total programs needed don't run on Linux and 'many others need a lot of effort and workarounds'.

> Hübner added, 'in the past 15 years, much of our efforts were put into becoming independent from Microsoft,' including spending 'a lot of money looking for workarounds' but 'those efforts eventually failed.'


> https://www.neowin.net/news/munich-germany-realizes-that-dep...

Looks like the open source equivalents for the software didn't deal well enough with different file formats.

techntoke(4131) 1 day ago [-]

Cue a concerted effort by MS salespeople to find their targets to blackmail or pay off to make the decision behind closed doors, and to spam all social media with defeatism using their AI bot farms.

macspoofing(10000) 1 day ago [-]

That's not why Munich moved back to Windows. They moved back because supporting Linux workstations for their workforce was hard and expensive. It wasn't very popular with the end-users either.

dbmueller(10000) 1 day ago [-]

Recently there was a vote in my town to decide whether to accept a budget for renewing the schools' IT infrastructure. It included Office 365 accounts and new computers and tablets (presumably iPads).

I wonder how different it would be were they using an OSS stack. Any stories?

NortySpock(10000) 1 day ago [-]

As I recall, this sort of public statement is used as a lever to squeeze some discounts out of Microsoft's Enterprise Sales team before the government 'decides' to stay with Windows.

Buetol(3635) 1 day ago [-]

As a french, I have to say the french local police (gendarmerie) has been using Ubuntu in all their stations for years and they are very happy with that !

nwah1(3897) 1 day ago [-]

Still means Linux is working as intended. Freedom means options. Options means bargaining power. Bargaining power means monopolies can't push you around as much.

giancarlostoro(3293) 1 day ago [-]

What's worse is they could all pull it off if they invested in existing distros and hired people to build out the main infrastructure they need. If they rely on legacy Windows OSes, the licenses for new OSes won't matter. They need secure systems to avoid being hacked by external malicious state actors.

I would love to see some countries adopting Linux for government systems and funding research and development e.g. maybe fund Libre Office or the KDE one more and then build out other tools to be cross platform.

AsyncAwait(4125) 1 day ago [-]

As a Linux aficionado, I have to agree with you, unfortunately. I've seen this too many times over the years not to be cynical about it.

0815test(10000) 1 day ago [-]

Yeah, they're not going to do it. Or do you think they could ever stand to be seen as running the exact same OS as their northern, uh, neighbors? I mean, let's get real.

mtgx(144) 1 day ago [-]

And suddenly three new Microsoft HQs appeared in SK.

macspoofing(10000) 1 day ago [-]

Yep. That's one common outcome. Another one is where they actually move and discover it's a nightmare to train people, and maintain the infrastructure.

pcurve(4091) 1 day ago [-]

With more things running on Web I think they could probably do partial transition of some computers.

robertAngst(3933) 1 day ago [-]

What does 'Linux OS' mean?

Various Linux desktop distros?

If so, have mercy on anyone who will be using Ubuntu Desktop for the first time. I hope they aren't afraid of using Terminal, because the internet will shout them down for wanting a GUI solution.

Love Ubuntu server, but could never get into linux desktop.

Nomentatus(4090) 1 day ago [-]

I've just given up on Linux again. Again. And it's the GUI that makes that decision final. There were also (and I'm still fighting 'em) serious file issues (perhaps related to file name lengths, perhaps not), but these may well have arisen from Windows/Linux interactions or even Windows not getting along with Windows. But flickering and otherwise unusable or unstable GUI features in the distributions finally did me in. Damn. Just plain not ready for prime time. It's not that you couldn't go Linux if you absolutely had to, but the cost is just way too high. (I put the most time in with Ubuntu and ElementaryOS.)

wpietri(3437) 1 day ago [-]

> Love Ubuntu server, but could never get into linux desktop.

It seems fine to me. I'm running it on my laptop right now, and I don't find it significantly more annoying than Windows or MacOS, both of which I've recently used on work machines.

I'm sure there are plenty of people for whom that's not true, especially people who need niche commercial software. But a great deal of what people use is on the web these days anyhow, so I think it matters a lot less what OS Chrome is running on top of.

lytedev(10000) 1 day ago [-]

I imagine it's easier for an IT department to address issues pretty quickly and remotely.

kikoreis(10000) 1 day ago [-]

Come on, this is not accurate. My wife, a journalist employed at the local university, and my mother, a shop owner, use Ubuntu every day and have never used the terminal. The demographic whose needs go beyond basic desktop functionality will indeed use AskUbuntu and the terminal, but we have to recognize ourselves as niche. Most office workers will use a browser and very little else.

jimmaswell(4116) 1 day ago [-]

The elitism can be absurd. Sometimes I wonder if it's stockholm syndrome. 'You want to configure things with a GUI with discoverability? Not read a dissertation-length man page and edit config files whose syntax changes between versions so you can't even rely on google results? Go back to winbl0ws'

antmanler(10000) 1 day ago [-]

Then run a full linux kernel upon WSL2...

techntoke(4131) 1 day ago [-]

WSL2 isn't out yet, and WSL sucked so much that I imagine WSL 2 won't be much better.

gchamonlive(10000) 1 day ago [-]

What I really don't understand is why commercial stations (like Burger King's, ice cream stands, etc.) use Windows when all those computers do is show either videos or static images.

arjunbajaj(10000) 1 day ago [-]

I've seen Burger King use Linux on their displays in India...

gridlockd(10000) 1 day ago [-]

The correct question is: Why should they use Linux? Historically, Linux struggled to display video.

The companies that make that stuff started out with Windows, they have built a workflow to manage these devices on top of Windows, why switch now? It's not cost effective.

zeusk(10000) 1 day ago [-]

Might not answer you exact question, but I interned with the display team in Windows - they had loads of weird asks for interesting display topologies (and adapter modes) to be supported for media displays.

I'm pretty sure I've never seen anything like that on LKML.

dkns(10000) 1 day ago [-]

My guess would be that it's easier to get tech support for windows than linux.

maximus1983(10000) 1 day ago [-]

It will be either one of these or a combination of these.

* Windows has been approved by management as the only OS allowed on the computer network.
* The software for showing the static images/videos, and whatever manages it, will have been written on Windows or even DOS.
* The manufacturer of said machines, or whoever wrote their software, was a Windows developer, and it is cheaper just to ship a copy of Windows Embedded than to rewrite the software, QA it, and integrate it.

I have a piece of software that I get small support contracts for. I tell clients it is Windows only. There is nothing stopping me from deploying to Linux or even one of the BSDs (I think). I just can't be bothered to deal with differences in distros, going through a QA process etc when it won't really get me many more sales.

klingonopera(10000) 1 day ago [-]

I'm guessing here: the guys installing those arguably very simple IT systems mostly couldn't be bothered with Linux.

Add to that, (at least here in Germany) BK is a franchise that sells licenses to franchise operators, so I doubt they have strict requirements on the actual IT setup, just the pictures they need to display.

So you have very normal, run-of-the-mill managers and IT services: the former unable to grasp that Windows 10, with its constant updates and fuck-ups, will constantly require the services of the IT guys, and the latter naturally profiting from that. The increased margin of not using Windows wouldn't be enough to compensate for that, and an increased price for a Linux package is something very difficult to convince those managers of.

And then also IT-people well versed in Linux usually have clients who pay substantially more than simply outfitting non-IT businesses with IT-equipment.

Grollicus(10000) 1 day ago [-]

Hope that works as well for them as the switch to Linux worked for Munich

beefhash(1180) 1 day ago [-]

Except that Munich switched back[1].

[1] https://www.theregister.co.uk/2017/11/13/munich_committee_sa...

Historical Discussions: Unlimited Google Drive storage by splitting binary files into base64 (May 14, 2019: 580 points)

(580) Unlimited Google Drive storage by splitting binary files into base64

580 points 6 days ago by lordpankake in 3561st position

github.com | Estimated reading time – 4 minutes | comments | anchor


All Comments: [-] | anchor

blackflame7000(3839) 5 days ago [-]

I wonder if this could be used to create a P2P network like bit torrent except trackers point to blocks at google doc urls instead of peers/seeds

throw4way19(10000) 5 days ago [-]

I discovered that a lot of pirate stream sites are already doing something similar (but not exact) to this.

They store fragments of movies (rather than the full videos) in Google Drive files and then combine them together during playback. Each fragment could then be copied and mirrored across different accounts, so if any are taken down they can just switch to another copy. Pretty clever (albeit abusive) solution for free bandwidth.
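A minimal sketch of the split-and-reassemble idea described above. This is a toy model, not any site's actual code: plain in-memory fragments stand in for the files that would be mirrored across Drive accounts, and the chunk size is tiny for demonstration purposes.

```python
CHUNK_SIZE = 4  # tiny for the demo; real video fragments would be megabytes


def split(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Split a blob into fixed-size fragments, each of which could be
    uploaded to (and mirrored across) separate accounts."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]


def reassemble(fragments: list[bytes]) -> bytes:
    """Concatenate the fragments back in order during 'playback'."""
    return b"".join(fragments)


movie = b"not actually a movie"
parts = split(movie)
assert reassemble(parts) == movie
```

If any single fragment is taken down, only that piece needs to be re-fetched from a mirror; the rest of the stream is unaffected.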

brundolf(2854) 5 days ago [-]

I had an (evil; don't do this) idea a while back to create a Dropbox-like program that stores all your data as binary chunks attached to draft emails spread across an arbitrary number of free email accounts.

follower(3701) 5 days ago [-]

Definitely would make an interesting learning exercise--I learned way more about SMTP/POP protocols* than I did before when I implemented demonstration SMTP/POP servers for my libgmail library before Gmail offered alternate means of access.

These days there's even the luxury of IMAP. :D

[*] About the only thing I remember now is the `HELO` and `EHLO` protocol start messages. :)

mmastrac(100) 5 days ago [-]

This existed just after gmail launched. Can't recall the name of the program, but I played around with it to store a few hundred MB in a test account.

derivagral(10000) 5 days ago [-]

Did this as a college project with a friend, was pretty fun.

Nowadays stuff like Dropbox is much more convenient and reliable.

quickthrower2(1505) 5 days ago [-]

I had an evil idea to create a key/value storage using HN dead comments.

MatthewRayfield(4101) 5 days ago [-]

I couldn't find it with a quick search, but I remember many years ago someone creating a similar scheme for storing files inside of TinyURLs.

You would run the uploader and get back a list of TinyURLs that could then be used to retrieve the files later with a downloader.

But you couldn't store too much in each URL so the resulting list could be pretty big.

abricot(4118) 5 days ago [-]

Someone also created a filesystem using DNS caches of others to store the files: https://news.ycombinator.com/item?id=16134041

klyrs(4129) 5 days ago [-]

This is a favorite lunch topic at work. AFAIK we stumbled on the idea ourselves, but I'm not surprised to hear it's unoriginal. Rather than a list, our design is a tree structure where leaf nodes contain data and branch nodes contain lists of tinyurls...
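The tree design above can be sketched in a few lines. Everything here is hypothetical: a plain dict stands in for the URL shortener, and LEAF_SIZE imitates the limited payload a single TinyURL could hold.

```python
import json
import uuid

store: dict[str, str] = {}  # stand-in for the URL shortener: key -> payload

LEAF_SIZE = 16  # pretend a single "URL" can only hold this much data


def shorten(payload: str) -> str:
    """Mimic creating a TinyURL that resolves to `payload`."""
    key = uuid.uuid4().hex[:8]
    store[key] = payload
    return key


def put(data: str) -> str:
    """Store data as a tree: leaf nodes hold chunks, branch nodes hold
    lists of keys. Returns the root key."""
    leaves = [shorten(json.dumps({"leaf": data[i:i + LEAF_SIZE]}))
              for i in range(0, len(data), LEAF_SIZE)]
    return shorten(json.dumps({"branch": leaves}))


def get(key: str) -> str:
    """Walk the tree from a key, concatenating leaf chunks in order."""
    node = json.loads(store[key])
    if "leaf" in node:
        return node["leaf"]
    return "".join(get(k) for k in node["branch"])


root = put("hello " * 10)
assert get(root) == "hello " * 10
```

A deeper tree (branches of branches) would let a fixed-size root fan out to arbitrarily large files, since `get` already recurses.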

joebergeron(3906) 5 days ago [-]

Me and a friend came up with a similar idea of a sort of distributed file system implemented across a huge array of blog comment sections. Of course you'd need a bunch of replication and fault tolerance and the ability to automatically scrape for new blogs to post spammy-looking comments on, but I thought it was a pretty funny and neat idea when we came up with it.

solotronics(10000) 5 days ago [-]

Even scarier than that would be a Turing-complete language where both the code and the memory live in comment sections. The actual execution could be done by reading comments, running a function, and writing comments back to store working memory and results. I guess with encryption you could even hide what you're doing.

jmkni(4009) 5 days ago [-]

I heard about a subreddit a while ago, where every post/comment was a random string. It was speculated at the time that something similar was going on.

pestaa(4116) 5 days ago [-]

The thought of a distributed MySQL cluster accessed over various versions of WordPress-as-a-database-layer just makes me happy and confused.

xamuel(4107) 5 days ago [-]

It's even more interesting to think about this in the context of preserving banned information for future generations. For example, if all the countries in the world united to ban the New Testament. But you eventually realize the ephemeral nature of the net will probably prevent it from fulfilling such long-term data-archiving roles and you're better off burying manuscripts deep underground.

dogma1138(4035) 5 days ago [-]

So UseNet?

baroffoos(10000) 5 days ago [-]

Has anyone actually tried storing a large amount of data like this? I feel like creating a new google account and using it as a backup for a 300gb folder I have.

acuozzo(4117) 4 days ago [-]

Yes. It's called: Post to alt.binaries.* on Usenet.

It's effectively the same thing under the hood. Binaries are split and converted to text using yEnc (or base64, et al.) and uploaded as 'articles'. An XML file containing all of the message-IDs (an 'NZB') is uploaded as well so that the file can be found, downloaded, and reassembled in the right order.

This form of binary distribution has been around since the '80s if you change some of the technical details; e.g. using UUencode rather than yEnc.

Spend $5 for a 3-day unlimited Usenet account with e.g. UsenetServer.com and upload it.

If you want it to stay up, then make another account in 3925 days (the retention period), download it, and then reupload it for another 10+ years of storage.

kevingrahl(4119) 5 days ago [-]

I would not, in any way, consider this a backup.

If it's only 300GB, check out Backblaze B2. It would cost you $1.50 per month for that amount of data.

Scaevolus(4121) 5 days ago [-]

You can also just use GSuite with a few users to get unlimited Google Drive storage.


fheld(10000) 5 days ago [-]

Relevant video on this unlimited plan and the way it is capped.

tl;dw: upload is limited to 750GB per day per account.

judge2020(4112) 5 days ago [-]

That references 'for education', but it's also true for GSuite Business (and enterprise, but not basic). You'll need to be paying for at least 5 users, or $60/month.

unicornfinder(10000) 5 days ago [-]

Yup, I'm storing 42TB on there at present.

gaspoweredcat(4033) 6 days ago [-]

very clever, well done!

ConcernedCoder(10000) 5 days ago [-]

yeah! let's hope no google employees see this ... oh wait

lordpankake(3561) 5 days ago [-]

genuinely sorry if you're a Google employee (probably won't put this on my Internship CV)

purplezooey(10000) 5 days ago [-]

Damn 4:3? That ain't too bad.

quickthrower2(1505) 5 days ago [-]

Base64 gives you 6 bits per character. Assuming a character requires 8 bits to store (e.g. in UTF-8), then yep, that's 8:6, i.e. 4:3. Might be better with compression, getting you closer to 1:1.
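
The overhead is easy to check with the standard library (a quick sketch, not tied to this particular tool):

```python
import base64

payload = bytes(range(256)) * 4          # 1024 bytes of sample binary
encoded = base64.b64encode(payload)      # 4 output chars per 3 input bytes

ratio = len(encoded) / len(payload)
print(len(payload), len(encoded), round(ratio, 3))  # 1024 1368 1.336
```

So raw base64 costs about 33% on top of the binary, before any per-document overhead.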

thrownaway954(4129) 5 days ago [-]

Honestly, this isn't groundbreaking; we have been using Base64 to convert binary to ASCII as a way of 'sharing' files all the way back to Usenet days. While applications like these make it easy for the masses to participate in the idea, they don't bring anything new to the table.

That all said, this is really cool from a design perspective, and I pored over the code and learned a lot.

aembleton(4091) 5 days ago [-]

It's also how email attachments work.

binwiederhier(4104) 5 days ago [-]

In the same spirit, I made a few 'just for fun' plugins for my (now abandoned) encrypted-arbitrary-storage Dropbox-like application Syncany:

The Flickr plugin [1] stores data (deduped and encrypted before upload) as PNG images. This was great because Flickr gave you 1 TB of free image storage. This was actually super cool, because the overhead was really small. No base64.

The SMTP/POP plugin [2] was even nastier. It used SMTP and POP3 to store data in a mailbox. Same for [3], but that used IMAP.

The Picasa plugin [4] encoded data as BMP images. Similar to Flickr, but different image format. No overhead here either.

All of this was strictly for fun of course, but hey it worked.

[1] https://github.com/syncany/syncany-plugin-flickr

[2] http://bazaar.launchpad.net/~binwiederhier/syncany/trunk/fil...

[3] http://bazaar.launchpad.net/~binwiederhier/syncany/trunk/fil...

[4] http://bazaar.launchpad.net/~binwiederhier/syncany/trunk/fil...

collinmanderson(1422) 5 days ago [-]

I have a feeling PNG might work on Google Photos too, but I haven't tried it.

userbinator(908) 5 days ago [-]

Anything that persists can be used to store arbitrary data... I remember (around a decade ago now, I'm not sure if these still exist) coming across some blogs that ostensibly had images of books, details about them, and links to buy them on Amazon and such... I only understood when I came across a forum posting from someone complaining that his ebook searches were clogged with such 'spam blogs', and another poster simply told him to look more carefully at those sites, but not to say anything more about his discoveries. You can probably guess what you got if you saved the surprisingly large 'full-size' cover image from those blogs and opened it in 7zip!

I feel less hesitant about revealing this now, given how long ago it was and that more accessible 'libraries' are now available.

oyebenny(10000) 5 days ago [-]

ELI5 please?

dvhh(4076) 5 days ago [-]

The script splits the file into small base64 chunks that are stored as 'documents' (MIME type: application/vnd.google-apps.document), which apparently don't count against Google Drive quota.

justinjlynn(3328) 5 days ago [-]

Sounds like a great way to lose your Google account (and all your other linked Google services) for ToS violations to me.

neltnerb(3901) 5 days ago [-]

Hehe, I was just thinking how simple it would be for Google to identify accounts using this technique from simple usage analytics. I suspect this won't work for long... but still, super cool!

kitotik(10000) 5 days ago [-]

Agreed. It also sounds like a great way to (ab)use a dumb commodity via ephemeral Google accounts to distribute data.

m-p-3(10000) 5 days ago [-]

I would make sure to not do this in an important Google account.

mindfulhack(4098) 5 days ago [-]

Indeed. I love the 'sorry @ the guys from google internal forums who are looking at this' line at the github. All tongue in cheek and aware of the situation.

TBH this is not unlike reporting a security bug to a company as a white hat, but more like a grey hat here.

markbnj(4106) 5 days ago [-]

Very neat, but it seems to me the issue with all wink-wink schemes like this is that you're ultimately getting something that wasn't explicitly promised, and so might be taken away at any time. So while interesting you couldn't really ever feel secure storing anything that mattered this way.

blackflame7000(3839) 5 days ago [-]

Yea but you could store unlimited backups across multiple accounts. (Not advocating this however)

yeukhon(2348) 5 days ago [-]

I don't know if anyone remembers, but some years ago I saw a file compressed from 1GB to 1MB, and I was amazed.

jl6(4112) 5 days ago [-]


scarejunba(10000) 5 days ago [-]

On the eDonkey network, the file size would be reported raw, but the clients could compress and transfer chunks to each other. Some guy had created an empty IL-2 Sturmovik ISO and seeded it. We lived at a government facility with ill-policed high-speed (for the time) internet, but even then I knew that I didn't have a 400 Mbps connection. Maybe 2002/2003.

The whole thing only transferred a few kB. It looked like an entire disc though.

dymk(10000) 5 days ago [-]

Maybe I'm missing a reference or joke here, but the size of a file means little with respect to how much it can be compressed. You can get a 1 petabyte file down to a few bytes if it's just `\0` repeated over and over.

userbinator(908) 5 days ago [-]

Base85 would probably be a better choice for storing binary as text, since it has a ratio of 5:4 instead of 4:3.

On the topic of 'unusual and free large file hosting', YouTube would probably be the largest, although you'd need to find a resilient way of encoding the data since their re-encoding processes are lossy.

I like the 'Linux ISO' and '1337 Docs' references ;-)
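
A quick comparison of the two encodings, using the standard library (note that Python's a85encode folds runs of four zero bytes into a single 'z', so degenerate all-zero test data would skew the numbers):

```python
import base64

data = bytes(range(256)) * 16            # 4 KiB of non-degenerate sample data

b64 = base64.b64encode(data)             # 4 chars per 3 bytes
a85 = base64.a85encode(data)             # 5 chars per 4 bytes
print(round(len(b64) / len(data), 3))    # 1.334
print(round(len(a85) / len(data), 3))    # 1.25
```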

DonHopkins(3706) 5 days ago [-]

Just use Unicode for the optimal highest possible base 1,114,112!

andrewstuart2(4015) 5 days ago [-]

Why not yEnc? 1-2% overhead and it's been in use on UseNet for binary storage for a very long time.

marquis-chacha(10000) 5 days ago [-]

You'd be at the mercy of them potentially changing their encoding scheme unannounced and corrupting your files.

yunyu(10000) 5 days ago [-]

Here is an implementation of arbitrary data storage using YouTube videos: https://github.com/dzhang314/YouTubeDrive

dspillett(4016) 5 days ago [-]

> Base85 would probably be a better choice

Base64 has the advantage of relative ubiquity (though Base85 is hardly rare, being used in PDF and Git binary patches). It also doesn't contain characters (quotes, angled brackets, ...) that might cause problems if naively sent via some text protocols and/or embedded in XML/HTML mark-up.

> YouTube ... you'd need to find a resilient way of encoding the data [due to lossy re-encoding]

That should be easy enough: encode the data as blocks or lines of pixels (blocks of 4x4 should be more than sufficient) in a low enough number of colour values (I expect you'd get away with at least 4 bits/channel with large enough blocks, so 4096 values per block), and you should easily be able to survive anything the re-encoding does by averaging each block and taking the closest value to that result.

Add some form of error detection and correction code, just for paranoia's sake. You are going to want to include some redundancy in the uploads anyway, so you can combine these needs in a manner similar to RAID5/6 or the Parchive format that was (is?) popular on binary-carrying Usenet groups.
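
The block-averaging idea can be sketched in a few lines of pure Python (toy parameters: 1 bit per 4x4 block, hard 0/255 grey levels):

```python
BLOCK = 4  # each bit becomes a BLOCK x BLOCK square of identical pixels

def encode_bits(bits, width_blocks):
    """Render a bit string as a grayscale frame (list of pixel rows)."""
    rows = []
    for start in range(0, len(bits), width_blocks):
        row = []
        for b in bits[start:start + width_blocks]:
            row.extend([255 if b == '1' else 0] * BLOCK)
        rows.extend(list(row) for _ in range(BLOCK))
    return rows

def decode_bits(rows, width_blocks):
    """Recover the bits by averaging each block -- tolerant of codec noise."""
    bits = []
    for r in range(0, len(rows), BLOCK):
        for c in range(width_blocks):
            px = [rows[r + dr][c * BLOCK + dc]
                  for dr in range(BLOCK) for dc in range(BLOCK)]
            bits.append('1' if sum(px) / len(px) > 127 else '0')
    return ''.join(bits)

frame = encode_bits('10110010', 4)
# Simulate a lossy re-encode by pushing every pixel 30 levels toward grey.
noisy = [[p - 30 if p else p + 30 for p in row] for row in frame]
print(decode_bits(noisy, 4))  # 10110010
```

Real codecs also smear across block edges, which is why you'd still want the error-correction layer on top.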

yalogin(3856) 5 days ago [-]

How is this different from encrypting the binary locally and storing the result as hex strings?

rahimnathwani(2702) 5 days ago [-]

It's more space-efficient: base64 adds about 33% overhead versus 100% for hex. And it's automated.

Causality1(10000) 5 days ago [-]

Reminds me of the old programs that would turn your Gmail storage into a network drive by splitting everything into 25MB chunks. Utterly miserable experience with terrible latency and reliability.

qntty(3907) 5 days ago [-]

GMail Drive I believe.


follower(3701) 5 days ago [-]

Yeah, there were a couple of projects that implemented that functionality (mentioned more in my comment https://news.ycombinator.com/item?id=19917018 if you're interested).

Also, 'Utterly miserable experience with terrible latency and reliability.' is such a great customer endorsement quote. :D

jonnycomputer(4122) 5 days ago [-]

Too bad there is no similar trick for atmospheric carbon.

jonnycomputer(4122) 1 day ago [-]

everyone can go fuck off

DonHopkins(3706) 5 days ago [-]

In 1998, the EFF and John Gilmore published the book about 'Deep Crack' called 'Cracking DES: Secrets of Encryption Research, Wiretap Politics, and Chip Design'. But at the time, it would have been illegal to publish the code on a web site, or include a CDROM with the book publishing the 'Deep Crack' DES cracker source code and VHDL in digital form.



>'We would like to publish this book in the same form, but we can't yet, until our court case succeeds in having this research censorship law overturned. Publishing a paper book's exact same information electronically is seriously illegal in the United States, if it contains cryptographic software. Even communicating it privately to a friend or colleague, who happens to not live in the United States, is considered by the government to be illegal in electronic form.'

So to get around the export control laws that prohibited international distribution of DES source code on digital media like CDROMS, but not in written books (thanks to the First Amendment and the Paper Publishing Exception), they developed a system for printing the code and data on paper with checksums, with scripts for scanning, calibrating, validating and correcting the text.

The book had the call to action 'Scan this book!' on the cover (undoubtedly a reference to Abbie Hoffman's 'Steal This Book').


A large portion of the book included chapter 4, 'Scanning the Source Code' with instructions on scanning the book, and chapters 5, 6, and 7 on 'Software Source Code,' 'Chip Source Code,' and 'Chip Simulator Source Code,' which consisted of pages and pages of listings and uuencoded data, with an inconspicuous column of checksums running down the left edge.

The checksums in the left column of the listings innocuously looked to the casual observer kind of like line numbers, which may have contributed to their true subversive purpose flying under the radar.

Scans of the cover and instructions and test pages for scanning and bootstrapping from Chapter 4:


(My small contribution to the project was coming up with the name 'Deep Crack', which was silkscreened on all of the chips, as a pun on 'Deep Thought' and 'Deep Blue', which was intended to demonstrate that there was a deep crack in the United States Export Control policies.)


The exposition about US export control policies and the solution for working around them that they developed for the book was quite interesting -- I love John Gilmore's attitude, which still rings true today: 'All too often, convincing Congress to violate the Constitution is like convincing a cat to follow a squeaking can opener, but that doesn't excuse the agencies for doing it.'


Chapter 4: Scanning the Source Code

In This chapter:

The Politics of Cryptographic Source Code

The Paper Publishing Exception



The next few chapters of this book contain specially formatted versions of the documents that we wrote to design the DES Cracker. These documents are the primary sources of our research in brute-force cryptanalysis, which other researchers would need in order to duplicate or validate our research results.

The Politics of Cryptographic Source Code

Since we are interested in the rapid progress of the science of cryptography, as well as in educating the public about the benefits and dangers of cryptographic technology, we would have preferred to put all the information in this book on the World Wide Web. There it would be instantly accessible to anyone worldwide who has an interest in learning about cryptography.

Unfortunately the authors live and work in a country whose policies on cryptography have been shaped by decades of a secrecy mentality and covert control. Powerful agencies which depend on wiretapping to do their jobs--as well as to do things that aren't part of their jobs, but which keep them in power--have compromised both the Congress and several Executive Branch agencies. They convinced Congress to pass unconstitutional laws which limit the freedom of researchers--such as ourselves--to publish their work. (All too often, convincing Congress to violate the Constitution is like convincing a cat to follow a squeaking can opener, but that doesn't excuse the agencies for doing it.) They pressured agencies such as the Commerce Department, State Department, and Department of Justice to not only subvert their oaths of office by supporting these unconstitutional laws, but to act as front-men in their repressive censorship scheme, creating unconstitutional regulations and enforcing them against ordinary researchers and authors of software.

The National Security Agency is the main agency involved, though they seem to have recruited the Federal Bureau of Investigation in the last several years. From the outside we can only speculate what pressures they brought to bear on these other parts of the government. The FBI has a long history of illicit wiretapping, followed by use of the information gained for blackmail, including blackmail of Congressmen and Presidents. FBI spokesmen say that was 'the old bad FBI' and that all that stuff has been cleaned up after J. Edgar Hoover died and President Nixon was thrown out of office. But these agencies still do everything in their power to prevent ordinary citizens from being able to examine their activities, e.g. stonewalling those of us who try to use the Freedom of Information Act to find out exactly what they are doing.

Anyway, these agencies influenced laws and regulations which now make it illegal for U.S. crypto researchers to publish their results on the World Wide Web (or elsewhere in electronic form).

The Paper Publishing Exception

Several cryptographers have brought lawsuits against the US Government because their work has been censored by the laws restricting the export of cryptography. (The Electronic Frontier Foundation is sponsoring one of these suits, Bernstein v. Department of Justice, et al ).* One result of bringing these practices under judicial scrutiny is that some of the most egregious past practices have been eliminated.

For example, between the 1970's and early 1990's, NSA actually did threaten people with prosecution if they published certain scientific papers, or put them into libraries. They also had a 'voluntary' censorship scheme for people who were willing to sign up for it. Once they were sued, the Government realized that their chances of losing a court battle over the export controls would be much greater if they continued censoring books, technical papers, and such.

Judges understand books. They understand that when the government denies people the ability to write, distribute, or sell books, there is something very fishy going on. The government might be able to pull the wool over a few judges' eyes about jazzy modern technologies like the Internet, floppy disks, fax machines, telephones, and such. But they are unlikely to fool the judges about whether it's constitutional to jail or punish someone for putting ink onto paper in this free country.

* See http://www.eff.org/pub/Privacy/ITAR_export/Bernstein_case/ .

Therefore, the last serious update of the cryptography export controls (in 1996) made it explicit that these regulations do not attempt to regulate the publication of information in books (or on paper in any format). They waffled by claiming that they 'might' later decide to regulate books--presumably if they won all their court cases -- but in the meantime, the First Amendment of the United States Constitution is still in effect for books, and we are free to publish any kind of cryptographic information in a book. Such as the one in your hand.

Therefore, cryptographic research, which has traditionally been published on paper, shows a trend to continue publishing on paper, while other forms of scientific research are rapidly moving online.

The Electronic Frontier Foundation has always published most of its information electronically. We produce a regular electronic newsletter, communicate with our members and the public largely by electronic mail and telephone, and have built a massive archive of electronically stored information about civil rights and responsibilities, which is published for instant Web or FTP access from anywhere in the world.

We would like to publish this book in the same form, but we can't yet, until our court case succeeds in having this research censorship law overturned. Publishing a paper book's exact same information electronically is seriously illegal in the United States, if it contains cryptographic software. Even communicating it privately to a friend or colleague, who happens to not live in the United States, is considered by the government to be illegal in electronic form.

The US Department of Commerce has officially stated that publishing a World Wide Web page containing links to foreign locations which contain cryptographic software 'is not an export that is subject to the Export Administration Regulations (EAR).'* This makes sense to us--a quick reductio ad absurdum shows that to make a ban on links effective, they would also have to ban the mere mention of foreign Universal Resource Locators. URLs are simple strings of characters, like http://www.eff.org; it's unlikely that any American court would uphold a ban on the mere naming of a location where some piece of information can be found.

Therefore, the Electronic Frontier Foundation is free to publish links to where electronic copies of this book might exist in free countries. If we ever find out about such an overseas electronic version, we will publish such a link to it from the page at http://www.eff.org/pub/Privacy/Crypto_misc/DESCracker/ .

* In the letter at http://samsara.law.cwru.edu/comp_law/jvd/pdj-bxa-gjs070397.h..., which is part of Professor Peter Junger's First Amendment lawsuit over the crypto export control regulations.


aasasd(10000) 5 days ago [-]

Afaik Phil Zimmermann was one of the first to do it, in '95 through MIT Press, when his PGP circulated a bit too widely for the export regulations. However, the question of whether he was protected under the 1st Amendment was never decided in court.

rnhmjoj(2620) 5 days ago [-]

The checksum is really interesting and would be useful even today. I have looked for the scripts but the links are all gone [1], unfortunately.

EDIT: I have found it[2], finally. It's pretty sad that so much of the internet is getting forgotten, though.

[1]: https://web.archive.org/web/19980630210313/http://www.pgpi.c...

[2]: https://the.earth.li/pub/pgp/pgpi/5.5/books/ocr-tools.zip

userbinator(908) 5 days ago [-]

> The checksums in the left column of the listings innocuously looked to the casual observer kind of like line numbers, which may have contributed to their true subversive purpose flying under the radar.

Are you implying there's something more interesting there than just the DES source code and related data that the book already very clearly claims to contain?

sagebird(4117) 5 days ago [-]

It seems like a cute and irrelevant distinction that electronic software would be published in a book. If researchers created a computer that processed information using proteins in plant cells instead of electrons, and such a computer could execute programs on this book directly instead of "scanning" it, would not the textbook be software? When laws say "electronic versions" I don't think they literally mean to refer electrons, but rather, computer-consumables/executables.

Was this tested before a court and did they accept this sort of obviously subversive behavior? (Not that I personally agree with the laws restricting crypto export.)

mrmuagi(10000) 5 days ago [-]

Quite an interesting read, however 'Deep Crack' is a horrible name.

mirimir(3338) 5 days ago [-]

Could someone please ELI5 how Google Drive doesn't include text files toward usage?

angelsl(10000) 5 days ago [-]

These aren't text files, but Google Docs files, which Google doesn't count against an account's quota.

reaperducer(3935) 5 days ago [-]

Base64 is such a wonderful gift.

Back when the commercial internet was just getting its act together there were companies that would give you free online access on Windows 3.1 machines in exchange for displaying ads in the e-mail client. (I think one was called Juno.)

The hitch was that you could only use e-mail. No web surfing. No downloading files. No fun stuff.

But that's OK, since there were usenet- and FTP-to-email gateways that you could ping and would happily return lists of files and messages. And if you sent another mail would happily send you base64-encoded versions of those binaries that you could decode on your machine.

The free e-mail service became slow-motion file sharing. But that was OK because you'd set it up before you went to bed and it would run overnight.

Thank you, whoever came up with base64.

jabl(10000) 5 days ago [-]

First time I was able to access the WWW via a graphical browser I had a dial-in shell account at an ISP (or BBS or whatever they called themselves back then), then there was a program called 'slirp' (which, amazingly enough, seems to have a wiki page at https://en.wikipedia.org/wiki/Slirp ) which allowed one to run 'SLIP' (IP-over-serial) over the terminal connection to get IP access from my computer. Amazingly I got it to work, considering I barely knew what I was doing back then.

One big reason why I became a Linux user was that the TCP/IP stack for Win 3.1, Trumpet Winsock, was amazingly unstable and would regularly crash the entire OS. Linux had, even back then, a stable TCP/IP stack. And fantastic advancements like preemptive multitasking running in protected mode so errant user-space applications didn't crash the OS.

Good times.

ohyeshedid(10000) 5 days ago [-]

That's really slick.

The original Juno ad server proxied the ads from the internet to the email client, and the proxy was wide open for several months. The first time I ever accessed the open internet at home was by dialing into the email service and bouncing through the proxy. I believe it was closed due to it being shared in the letters section of a hacker zine.

SergeAx(3969) 5 days ago [-]

Long before base64 there was uuencode, but it was quite sensitive to whitespace and mail-client reflowing, so it didn't make it into the RFC standards.

mmaunder(1410) 5 days ago [-]

Reminds me of usenet warez groups filled with uuencoded posts. If you took the time to reassemble them and decode, it worked.

gjtorikian(3776) 5 days ago [-]

Yep, Juno, NetZero, and Walmart BlueLight were all free ISPs that were super easy to manipulate. :)

groby_b(4081) 5 days ago [-]

Mary Ann Horton[1] is probably one of the people you want to thank. She's responsible for uuencode.

[1] https://en.wikipedia.org/wiki/Mary_Ann_Horton

romanhn(2949) 5 days ago [-]

That reminds me of the first time I accessed World Wide Web. Back in '96 I was browsing a computer magazine and happened upon a listing of useful mailing lists, one of which returned the contents of web pages for a requested HTTP address. Same magazine had an install CD for the free Juno email service.

Being a teenager, the first web page I ever requested was www.doom.com, which returned a gibberish of text to Juno's email client. It was an HTML file full of IMG tags (one of those 'Click here to enter' gateway pages), but I had no idea what I was looking at at the time. Somehow figured out to open the file in IE2 and saw... a bunch of broken images :)

I still vividly remember the sense of wonder that the early Internet evoked.

EDIT: Just checked the Wayback Machine. Looks like www.doom.com was not affiliated with the game at the time, so I must have browsed to www.idsoftware.com instead.

anilakar(10000) 5 days ago [-]

Correction: unlimited only for as long as it takes Google to fix this oversight in quota calculation.

throwawaygoog10(10000) 5 days ago [-]

It isn't truly an oversight, it's an abuse of the fact that Docs/Sheets/Slides are not counted toward your quota. Their storage model is a little more complicated than a standard stream of bytes like an image or a text file.

whack(3159) 5 days ago [-]

For anyone else who's as confused as I initially was: Google Drive allows unlimited storage for anything stored as 'Google Docs', i.e. their version of Word. This hack works by converting your binary files into base64-encoded text, and then storing the text in a collection of Google Docs files.

I.e., it's actually increasing the amount of storage space needed to store the same binary, but it's getting around the drive quota by storing it in a format that has no quota.
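
The chunking step amounts to something like this (a sketch only; the 700k-character budget and helper names are assumptions, not the actual tool's code):

```python
import base64

MAX_DOC_CHARS = 700_000  # assumed per-document text budget

def to_doc_chunks(blob):
    """Base64 the binary, then slice the text into Doc-sized pieces."""
    text = base64.b64encode(blob).decode('ascii')
    return [text[i:i + MAX_DOC_CHARS]
            for i in range(0, len(text), MAX_DOC_CHARS)]

def from_doc_chunks(chunks):
    """Reassemble and decode; chunk order must be preserved (e.g. by name)."""
    return base64.b64decode(''.join(chunks))

blob = bytes(range(256)) * 8192            # 2 MiB sample
chunks = to_doc_chunks(blob)
print(len(chunks))                         # 4 'documents' for 2 MiB
assert from_doc_chunks(chunks) == blob     # lossless round trip
```

Each chunk would then be uploaded as a file with the Google Docs MIME type, which is what keeps it out of the quota.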

markstos(10000) 5 days ago [-]

Seems like a good way to earn yourself a Terms-of-Service ban.

If this is considered an abuse of services now, the terms could be updated to clarify that.

The fingerprint of big chunks of base64-encoded blobs in Google Docs would be easy to spot.

If Google cares to notice this and take action, they can and will.

jfengel(10000) 5 days ago [-]

I wonder why they do that. It seems to me like it would be more effort to leave the Google Docs files out of their calculation, and with no real benefit. For conventional use of Google Docs it would be hard to use a significant amount of disk space, so it's not like users would be clamoring for additional space.

Perhaps it's just marketing, trying to prize people away from Microsoft Office with a thing that doesn't actually cost them all that much?

robador51(10000) 5 days ago [-]

I may be mistaken, but as far as I'm aware Google docs synced to your local machine are nothing more than links to documents in the Google Drive cloud. None of the data inside those docs is actually stored locally. I found this out the hard way when I decided to move away from GD and lost a lot of files.

So buyer beware I guess.

throwawaygoog10(10000) 5 days ago [-]

Should you want to move from Google services, the best way of ensuring you keep your data is to use Takeout [1], which exports your documents as both doc and html files.

[1] https://takeout.google.com

mikorym(10000) 5 days ago [-]

So, are the ±700 kB files too small to register as taking up any space?

ptman(4127) 5 days ago [-]

google drive doesn't count docs, spreadsheets or presentations against your quota

lofties(10000) 5 days ago [-]

Very cool! About a year ago I had a similar idea, but to store arbitrary data in PNG chunks[1] and upload them to 'unlimited' image hosts like Imgur and Reddit.

[1] http://blog.brian.jp/python/png/2016/07/07/file-fun-with-pyh...
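
The PNG container makes this easy: every chunk is just length + type + data + CRC, and decoders skip ancillary chunks they don't recognize. A self-contained sketch (the 'prVt' chunk name is invented here; real tools might use text chunks like zTXt instead):

```python
import struct
import zlib

def chunk(ctype, data):
    """A PNG chunk: length + type + data + CRC32(type + data)."""
    return (struct.pack('>I', len(data)) + ctype + data
            + struct.pack('>I', zlib.crc32(ctype + data)))

def tiny_png():
    """A minimal 1x1 grayscale PNG, built from scratch."""
    sig = b'\x89PNG\r\n\x1a\n'
    ihdr = chunk(b'IHDR', struct.pack('>IIBBBBB', 1, 1, 8, 0, 0, 0, 0))
    idat = chunk(b'IDAT', zlib.compress(b'\x00\x00'))  # filter byte + 1 pixel
    return sig + ihdr + idat + chunk(b'IEND', b'')

def embed(png, payload):
    """Insert the payload as a private ancillary chunk just before IEND."""
    iend_at = png.rindex(b'IEND') - 4  # back up over the 4-byte length field
    return png[:iend_at] + chunk(b'prVt', payload) + png[iend_at:]

def extract(png):
    at = png.index(b'prVt')
    (length,) = struct.unpack('>I', png[at - 4:at])
    return png[at + 4:at + 4 + length]

stego = embed(tiny_png(), b'secret bytes')
print(extract(stego))  # b'secret bytes'
```

The catch, as others note in the thread, is that hosts which re-encode or strip metadata will silently discard unknown chunks.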

nemosaltat(10000) 5 days ago [-]

Anyone remember Gdrive? I can't find it now, but I think it was probably early or mid 2000s. It let you store files as a local disk (FUSE) via Gmail attachments.

follower(3701) 5 days ago [-]

I do! :)

'Gdrive' (here: http://pramode.net/articles/lfy/fuse/pramode.html ) and the 'Gmail Filesystem'/'GmailFS' (here: https://web.archive.org/web/20060424165737/http://richard.jo... as mentioned elsewhere in this thread) were both built on top of `libgmail` (here: http://libgmail.sourceforge.net/ ) a Python library I developed.

There were a couple of different projects at the time (listed in 'Other Resources' on the project page) that sought to provide a programmatic Gmail interface.

I still have a 'ftp' label in Gmail (checks notes 15 years later...) from the experimental FTP server I implemented as a libgmail example. :D

The libgmail project was probably the first project of mine which attracted significant attention including others basing their projects on it along with mentions in magazines and books which was pretty cool.

I think my favourite memory from the project was when Jon Udell wrote in a InfoWorld column ( http://jonudell.net/udell/2006-02-07-gathering-and-exchangin... ) that he considered libgmail 'a third-party Gmail API that's so nicely done I consider it a work of art.' It's a quality I continue to strive for in APIs/libraries I design these days. :)

(Heh, I'd forgotten he also said 'I think Gmail should hire the libgmail team, make libgmail an officially supported API'--as the entirety of the 'team' I appreciated the endorsement. :) )

The library saw sufficient use that it was also my first experience of trying to plot a path for maintainership transition in a Free/Libre/Open Source licensed project. I tried to strike a balance between a sense of responsibility to existing people using the project and trusting potential new maintainers enough to pass the project on to them. Looking back I felt I could've done a better job of the latter but, you know, learning experiences. :)

My experiences related to AJAX reverse engineering of Gmail (which was probably the first high profile AJAX-powered site) later led to reverse engineering of Google Maps when it was released and creating an unofficial Google Maps API before the official API was released: http://libgmail.sourceforge.net/googlemaps.html

But that's a whole other story... :)

imglorp(3760) 5 days ago [-]

Yeah now look for https://github.com/vitalif/grive2

AFAIK it mostly still works. The older 'grive' might not.

BlackRing(10000) 5 days ago [-]

I remember using it back in 2005 iirc, and it was amazing. The files had a label called gmailfs.gDisk which is how it could keep the 'file system' separate from the rest.

Now Google generously offers Drive with 15 GB of space.

furyofantares(4078) 5 days ago [-]

Even URL shorteners offer unlimited storage if you jump through enough hoops.

To encode ABCDEFGHIJKLMNOPQRSTUVWXYZ first get a short url for http://example.com/ABC, then take the resulting url and append DEF and run it through the service again. Repeat until you run out of payload, presumably doing quite a few more than 3 bytes at a time.

The final short URL is your link to the data, which can be unpacked by stripping the payload bytes and then following the links backwards until you get to your initial example.com node.
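
A toy model of the scheme, with an in-memory dictionary standing in for the shortening service (all names here are made up):

```python
import secrets

class FakeShortener:
    """Stand-in for a real shortening service: maps short codes to long URLs."""
    def __init__(self):
        self.table = {}

    def shorten(self, url):
        code = secrets.token_hex(4)  # 8-character opaque code
        self.table[code] = url
        return code

    def expand(self, code):
        return self.table[code]

def store(svc, payload, chunk=3):
    # First link carries the first chunk on a real-looking URL.
    code = svc.shorten('http://example.com/' + payload[:chunk])
    # Every later link embeds the previous code plus the next chunk.
    for i in range(chunk, len(payload), chunk):
        code = svc.shorten('http://sho.rt/' + code + payload[i:i + chunk])
    return code

def retrieve(svc, code):
    parts = []
    while True:
        url = svc.expand(code)
        if url.startswith('http://example.com/'):
            parts.append(url[len('http://example.com/'):])
            break
        body = url[len('http://sho.rt/'):]
        code = body[:8]         # previous short code
        parts.append(body[8:])  # payload chunk
    return ''.join(reversed(parts))

svc = FakeShortener()
key = store(svc, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ')
print(retrieve(svc, key))  # ABCDEFGHIJKLMNOPQRSTUVWXYZ
```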

jerf(3287) 5 days ago [-]

I've lost track of the number of times I've seen variants on 'Hey, a link shortener is a fun first project for this new language I'm learning; hey, $LANGUAGE_COMMUNITY, I've put this up on the internet now!... hey, uh, $LANGUAGE_COMMUNITY, I've had to take it down due to abuse.' There are numerous abuse vectors. Optionally promise to get it back up real soon now, as if there are actually people depending on it.

Maybe it isn't a bad first project, but on no account should you put it up on the 'real' internet and tell anyone it exists.

reneberlin(10000) 6 days ago [-]

I think in the long run, the user could risk losing the complete Google account if they begin rating the uploads a violation of the TOS.

I advise a totally separate account when using this tool.

But anyways, something inside me likes it. Nicely done. Good job :)

mo5(10000) 5 days ago [-]

I'm kinda scared to try it since google could mass ban all of the accounts if they want to, but sure is a great job from the dev.

I didn't know this was plausible.

lordpankake(3561) 5 days ago [-]

This is a complete hack job and probably useless if Google changes free storage for docs.

That being said, they currently allow the guys at /r/datahoarder to use gsuite accounts costing £1 for life with unlimited storage quotas. These are regularly filled to like 50TB and Google doesn't bat an eye.

asdfasgasdgasdg(10000) 5 days ago [-]

At least a totally separate account. Probably better to use a totally separate set of IP addresses and browsers and maybe even computers. Google will definitely link accounts created from the same browser and potentially ban your main account if you violate their TOS on another account also owned by you.

zelon88(3932) 5 days ago [-]

Couldn't you also embed data into images and upload them to Google photos, or is that discarded when they convert and compress the image in the backend?

PetahNZ(10000) 5 days ago [-]

Depends how you encode it. A bunch of QR codes, no problem. But encoding into the individual pixels, probably not so much.

lordpankake(3561) 5 days ago [-]

Check the issues! Some people have tried quite hard to figure that one out.

irrational(10000) 5 days ago [-]

I've wondered if someone could do the same thing with videos and jpgs. Amazon Prime, as one example, allows you to store an unlimited number of image files for 'free'. What if there was a program that would take a video file, split it up into its individual frames as jpgs, and store them on Amazon Prime. When you wanted to watch the video, the program would rebuild the video file from the individual jpgs on AWS.

michaelmior(3769) 5 days ago [-]

My guess would be that the latency of this approach would be far too high to be practical. But you could probably abuse the JPEG format to stuff bits of the video into image files. I think you'd probably still need to spend a fair amount of time buffering before you could start watching without lag.

wmichelin(10000) 5 days ago [-]

Someone is going to notice a few accounts with insanely high storage usage, and then comes the ban-hammer. Enjoy losing your Google account!

markbnj(4106) 5 days ago [-]

From the project page:

> sorry @ the guys from google internal forums who are looking at this

sircastor(10000) 5 days ago [-]

I think that depends on their tools and how they evaluate data usage. If the reporting states that the accounts are using very little storage because it's using the same measuring stick that the client does, then it's invisible. The question comes up during an audit of the system when the disk usage doesn't match the report. Then again, if this is used by few people it may just look like a margin of error.

Historical Discussions: Things to use in Python 3 (May 15, 2019: 572 points)
Things you're probably not using in Python 3 – but should (May 07, 2019: 6 points)

(572) Things to use in Python 3

572 points 5 days ago by ausjke in 640th position

datawhatnow.com | Estimated reading time – 8 minutes | comments | anchor

Many people started switching their Python versions from 2 to 3 as a result of the Python 2 EOL. Unfortunately, most Python 3 code I find still looks like Python 2, but with parentheses (even I am guilty of that in my code examples in previous posts – Introduction to web scraping with Python). Below, I show some examples of exciting features you can only use in Python 3, in the hope that it will make solving your problems with Python easier.

All the examples are written in Python 3.7 and each feature contains the minimum required version of Python for that feature.

f-strings (3.6+)

It is difficult to do anything without strings in any programming language and in order to stay sane, you want to have a structured way to work with strings. Most people using Python prefer using the format method.

user = 'Jane Doe'
action = 'buy'
log_message = 'User {} has logged in and did an action {}.'.format(
    user, action)
# User Jane Doe has logged in and did an action buy.

Alongside of format, Python 3 offers a flexible way to do string interpolation via f-strings. The same code as above using f-strings looks like this:

user = 'Jane Doe'
action = 'buy'
log_message = f'User {user} has logged in and did an action {action}.'
# User Jane Doe has logged in and did an action buy.

Pathlib (3.4+)

f-strings are amazing, but some strings like file paths have their own libraries which make their manipulation even easier. Python 3 offers pathlib as a convenient abstraction for working with file paths. If you are not sure why you should be using pathlib, try reading this excellent post – Why you should be using pathlib – by Trey Hunner.

from pathlib import Path
root = Path('post_sub_folder')
print(root)
# post_sub_folder
path = root / 'happy_user'
# Make the path absolute
print(path.resolve())
# /home/weenkus/Workspace/Projects/DataWhatNow-Codes/how_your_python3_should_look_like/post_sub_folder/happy_user

Type hinting (3.5+)

Static vs dynamic typing is a spicy topic in software engineering and almost everyone has an opinion on it. I will let the reader decide when they should write types, but I think you should at least know that Python 3 supports type hints.

def sentence_has_animal(sentence: str) -> bool:
  return 'animal' in sentence
sentence_has_animal('Donald had a farm without animals')
# True

Enumerations (3.4+)

Python 3 supports an easy way to write enumerations through the Enum class. Enums are a convenient way to encapsulate lists of constants so they are not randomly located all over your code without much structure.

from enum import Enum, auto
class Monster(Enum):
    ZOMBIE = auto()
    WARRIOR = auto()
    BEAR = auto()
print(Monster.ZOMBIE)
# Monster.ZOMBIE

An enumeration is a set of symbolic names (members) bound to unique, constant values. Within an enumeration, the members can be compared by identity, and the enumeration itself can be iterated over.

for monster in Monster:
    print(monster)
# Monster.ZOMBIE
# Monster.WARRIOR
# Monster.BEAR

Built-in LRU cache (3.2+)

Caches are present in almost any horizontal slice of the software and hardware we use today. Python 3 makes using them very simple by exposing an LRU (Least Recently Used) cache as a decorator called lru_cache.

Below is a simple Fibonacci function that we know will benefit from caching because it does the same work multiple times through a recursion.

import time
def fib(number: int) -> int:
    if number == 0: return 0
    if number == 1: return 1
    return fib(number-1) + fib(number-2)
start = time.time()
fib(40)
print(f'Duration: {time.time() - start}s')
# Duration: 30.684099674224854s

Now we can use lru_cache to optimize it (this optimization technique is called memoization). The execution time goes down from tens of seconds to microseconds.

from functools import lru_cache
@lru_cache(maxsize=512)
def fib_memoization(number: int) -> int:
    if number == 0: return 0
    if number == 1: return 1
    return fib_memoization(number-1) + fib_memoization(number-2)
start = time.time()
fib_memoization(40)
print(f'Duration: {time.time() - start}s')
# Duration: 6.866455078125e-05s

Extended iterable unpacking (3.0+)

I will let the code speak here (docs).

head, *body, tail = range(5)
print(head, body, tail)
# 0 [1, 2, 3] 4
py, filename, *cmds = 'python3.7 script.py -n 5 -l 15'.split()
print(py)
print(filename)
print(cmds)
# python3.7
# script.py
# ['-n', '5', '-l', '15']
first, _, third, *_ = range(10)
print(first, third)
# 0 2

Data classes (3.7+)

Python 3 introduces data classes, which do not have many restrictions and can be used to reduce boilerplate code, because the decorator auto-generates special methods such as __init__() and __repr__(). From the official proposal, they are described as "mutable named tuples with defaults".

class Armor:
    def __init__(self, armor: float, description: str, level: int = 1):
        self.armor = armor
        self.level = level
        self.description = description
    def power(self) -> float:
        return self.armor * self.level
armor = Armor(5.2, 'Common armor.', 2)
print(armor.power())
# 10.4
print(armor)
# <__main__.Armor object at 0x7fc4800e2cf8>

The same implementation of Armor using data classes.

from dataclasses import dataclass
@dataclass
class Armor:
    armor: float
    description: str
    level: int = 1
    def power(self) -> float:
        return self.armor * self.level
armor = Armor(5.2, 'Common armor.', 2)
print(armor.power())
# 10.4
print(armor)
# Armor(armor=5.2, description='Common armor.', level=2)

Implicit namespace packages (3.3+)

One way to structure Python code is in packages (folders with an __init__.py file). The example below is given by the official Python documentation.

sound/                          Top-level package
      __init__.py               Initialize the sound package
      formats/                  Subpackage for file format conversions
      effects/                  Subpackage for sound effects
      filters/                  Subpackage for filters

In Python 2, every folder above had to have an __init__.py file which turned that folder into a Python package. In Python 3, with the introduction of Implicit Namespace Packages, these files are no longer required.

sound/                          Top-level package
      formats/                  Subpackage for file format conversions
      effects/                  Subpackage for sound effects
      filters/                  Subpackage for filters

EDIT: as some people have said, this is not as simple as I made it out in this section. Per the official PEP 420 specification, __init__.py is still required for regular packages; dropping it from the folder structure turns the folder into a native namespace package, which comes with additional restrictions. The official docs on native namespace packages show a great example of this, as well as naming all the restrictions.

Closing note

Like almost any list on the internet, this one is not complete. I hope this post has shown you at least one Python 3 functionality you did not know existed before, and that it will help you write cleaner and more intuitive code. As always, all the code can be found on GitHub.

All Comments: [-] | anchor

purplezooey(10000) 5 days ago [-]

I know types are spicy. But F them. I hate this. If I wanted all this junk I'd stick to C++.

nurettin(10000) 5 days ago [-]

Are you saying that you exited the C++ scene because dealing with types is tedious? For me, C++ career opportunities were extremely rare to come by and I never had the luxury to change jobs because I didn't like the language in some way.

Some people have it too easy.

dang(172) 5 days ago [-]

Could you please stop posting unsubstantive comments to Hacker News?

Grue3(10000) 5 days ago [-]


What happened to 'explicit is better than implicit'? 'Only one way to do stuff'? One of the more baffling Python features if you ask me.

sametmax(3835) 5 days ago [-]

This debate already took place. Go back to the python-idea mailing list posts about f-string for a complete back and forth on your point.

shuzchen(3659) 5 days ago [-]

You seem to have forgotten 'practicality beats purity'

qwsxyh(10000) 5 days ago [-]

f-strings are not the same way to do things but a different way. You can't explode a dictionary into an f-string to fill in the fields, for example, but you can in .format().

paganel(2546) 5 days ago [-]

Exactly, the f-string syntax looks so close to PHP's way of handling strings that I'm really amazed that it managed to pass and be implemented. I mean, I don't see that much of a difference between

> name = 'Jane'

> print(f"Hello, {name}")

and PHP's:

> $name = "Jane";

> echo "Hello, $name";

whalesalad(343) 5 days ago [-]

f-strings is the right way to do this going forward, the rest will die off.

dorfsmay(3309) 5 days ago [-]

How do you define class variable in a data class?

nerdwaller(10000) 5 days ago [-]

You can use `typing.ClassVar[<datatype>]`
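nerdwaller's suggestion in context, a minimal sketch (Counter and bump are invented names): fields annotated with `typing.ClassVar` are excluded from the generated `__init__` and remain class-level variables.

```python
from dataclasses import dataclass
from typing import ClassVar

@dataclass
class Counter:
    name: str
    total: ClassVar[int] = 0   # class variable, not an __init__ field

    def bump(self) -> None:
        Counter.total += 1

c = Counter('clicks')   # no 'total' argument expected
c.bump()
print(Counter.total)    # 1
```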

fulafel(3402) 5 days ago [-]

Originally small dynamic languages like Python and Javascript are constantly growing new language/stdlib features but rarely taking anything away. I wonder how this affects learnability, how long will it take for a programming newbie to be able to jump into an existing project or read other people's code. I guess this could be measured and studied, I wonder if someone has done it.

I think Python was probably at a pretty good local sweet spot in the complexity:power ratio around the 2.0 version.

sametmax(3835) 5 days ago [-]

Python has taken many things away, hence the 2 to 3 transition. And people didn't like it. Not one bit. We even had to bring features back (u'', %, etc) after taking them away.

Now I agree, if I could snap my gauntlet I would remove a lot more. We don't need Template(), the wave module or @static and so many other things. And we could have cleaned up Python more deeply in the transition.

But reality is very messy.

Plus, you gotta realize that the Python community has nowhere near the resources of languages such as Javascript. JS is developed by giants that poured hundreds of millions in it and set dozens of geniuses to work solely on improving the implementations.

Python barely (and this is recent, we had less before!) has a few millions for operating the PSF related activities, which includes organizing PyCons all over the world and hosting python.org or pypi.org. Only a handful of people are actually paid to work on Python.

So you have a giant, diverse community, with billions of lines of code for so many different activities, and not enough resources to work on cleaning up everything.

Welcome to life, this stuff where if you want to get things done, you have to compromise.

yen223(3952) 5 days ago [-]

Python is definitely heading down that dark path.

I wonder how many pythonistas are fully across all of the new syntax coming, like positional-only args or even the infamous assignment expression.

zild3d(10000) 5 days ago [-]

brings to mind C++. If you learned it 15 years ago and haven't kept up, new C++ will look like a new language.

Don't think the same is true for python (yet?)

alexgmcm(10000) 5 days ago [-]

I just want to thank the author for not using Medium.

Far too many tech blog posts use that platform now and I don't like it at all, as it feels really bloated and I can never be sure if what I'm about to click is a 'premium' Medium post or not.

dna_polymerase(3336) 5 days ago [-]

Wow, Medium built such a dysfunctional webservice that people start actively thanking others not to use it - in completely unrelated posts. Nice job.

rhizome31(3945) 5 days ago [-]

Glad to see I'm not the only one thinking like this. Medium is the MySpace of the late 2010s.

Good blog post by the way.

skocznymroczny(10000) 5 days ago [-]

I like that the article doesn't have generic meme images separating each paragraph.

maratumba(10000) 5 days ago [-]

Been enjoying the Make Medium Readable Again (https://chrome.google.com/webstore/detail/make-medium-readab...) plugin. Removes all the clutter from the page, you won't even know you are on Medium!

baseballMan(10000) 5 days ago [-]

I'm by no means a hardcore python dev but I feel like these are all pretty basic, common things. No?

nerdbaggy(4131) 5 days ago [-]

I don't think an LRU cache is basic

dagw(3422) 5 days ago [-]

I've been programming python, on and off, since 2001, and I still learnt a couple of things.

whoevercares(10000) 5 days ago [-]

We probably should compare the timeline when those features are added to other languages

jedberg(2207) 5 days ago [-]

They're basic but also things you'd never look up, especially if you've been doing python a long time.

For example, I just learned about f strings, because why would I need to ever look up if there is a replacement for .format()?

And the LRU cache. I've been hand rolling that for years, but I never thought to see if they had added it to the standard library, because why would I look?

iandanforth(3917) 5 days ago [-]

TIL - Enum, auto(), @dataclass, and that classes don't need (object) anymore. Neat!

DonaldPShimoda(10000) 5 days ago [-]

EDIT: nope, nope, this is all wrong. My mistake. Ignore everything else here.

> classes don't need (object) anymore

My friend, old-style classes became unnecessary with the release of Python 2.2 — almost 20 years ago now [0]. I'm afraid you've been living in the past. :)

[0] https://wiki.python.org/moin/NewClassVsClassicClass

nerdwaller(10000) 5 days ago [-]

A funky thing about the Enum class is you may also want the `@enum.unique` decorator in most cases, which enforces the enum/value pair being unique. I found the documentation a little misleading until I found that [0].

[0] https://docs.python.org/3/library/enum.html#enum.unique
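To illustrate the point about `@enum.unique` (Status is an invented example): without the decorator, a repeated value silently becomes an alias; with it, class creation fails immediately.

```python
from enum import Enum, unique

try:
    @unique
    class Status(Enum):
        OK = 1
        DONE = 1   # same value as OK: would be a silent alias without @unique
    created = True
except ValueError as err:
    created = False
    print(err)     # duplicate values found in <enum 'Status'>: DONE -> OK
```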

sametmax(3835) 5 days ago [-]

I'm training people in Python regularly and my classroom experience is: ALWAYS USE __INIT__.PY EVERYWHERE. Don't use this namespace feature. You think you understand how it works, but you don't. It will bite you.

simonh(10000) 5 days ago [-]

I just read the docs on this, but don't really grok the pros-and-cons. Can you give an example of a gotcha? That would be very helpful.

scrollaway(2977) 5 days ago [-]

Amen. Much as I wish python importing was as powerful as js, that ship has sailed and we're nowhere close to getting rid of the dunder-init file.

Also pathlib is .. urgh. This is one of those that shouldn't be in the stdlib. I don't get why it is, but requests isn't.

k4ch0w(4121) 5 days ago [-]

F strings have been amazing in my everyday programming. I have been programming in Python for about 8 years now. I'd say some additions to this list are

  Conda for managing environments and Deep Learning builds such as Cuda on Ubuntu
  Jupyter notebooks for quick prototyping
  IPython if you have never seen it before.
  Requests Library
sametmax(3835) 5 days ago [-]

Despite being so cool, it's also a very under utilized feature, as it inherits all the formatting capabilities of format().

With format(), and hence with f-strings, you can:

# Replace datetime.strftime()

>>> f'{datetime.now():%m/%d/%y}'

# Show numbers in another base

>>> f'{878:b}'
'1101101110'

>>> f'{878:x}'
'36e'

>>> f'{878:e}'
'8.780000e+02'

>>> f'{878:o}'
'1556'

# Control the display of padding, filling, and precision of numbers

>>> 1 / 3
0.3333333333333333

>>> f'{1 / 3:.2f}'
'0.33'

>>> f'{69:4d}'
'  69'

>>> f'{69:=+8d}'
'+     69'

>>> f'{69:0=8d}'
'00000069'

# Center text

>>> f"{'foo':^10}"
'   foo    '

And so many other things, since you can create your custom __format__.

Not affiliated, but in this regard, I quite love https://pyformat.info/ as a format cheat sheet.

nerdbaggy(4131) 5 days ago [-]

Before I knew about F strings my .formats() were just so long. F strings are nice for sure

pfranz(10000) 5 days ago [-]

I had a co-worker who liked the convention:

  >>> 'User {user} has logged in and did an action {action}.'.format(**locals())
  'User Jane Doe has logged in and did an action buy.'
I thought it was clever at first, but noticed it was frustrating to refactor because it was hard to see the scope of variables (especially if the template string is defined elsewhere). I don't lean heavily on IDEs, but at the time it would flag many required variables as unused because it couldn't grok this.

Do you find the same problems when using f strings?

AtHeartEngineer(10000) 5 days ago [-]

Conda with pipenv = great

ehsankia(10000) 5 days ago [-]

They just added a brand new feature in 3.8 that lets you just append = at the end for quick debugging:

>>> f'{5+5=}'
'5+5=10'

>>> foo = 5

>>> f'{foo=}'
'foo=5'

IanCal(4101) 5 days ago [-]

Beware lru_cache and mutable objects. I'm on a phone so this will probably contain an error but:

    from functools import lru_cache

    @lru_cache()
    def slow():
        return {'result': 'immutable?'}
    answer = slow()
    answer['result'] = 'no'  # slow() now returns the mutated dict
IanCal(4101) 5 days ago [-]

Can't edit now, but here's a runnable gist https://gist.github.com/IanCal/d703106344e876728d2aaef67800d...

andreareina(10000) 5 days ago [-]

It also won't let you cache on mutable values, so you can't pass in lists or dicts. I've used it on occasion, a lot of the time I ended up rolling my own instead. Usually without expiration since I didn't need it.
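The unhashable-arguments restriction is easy to demonstrate (total() is an invented example): lru_cache keys the cache on the arguments, so they must be hashable.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(values):
    return sum(values)

print(total((1, 2, 3)))   # 6 — tuples are hashable, so they cache fine
try:
    total([1, 2, 3])      # lists are unhashable, so the cache lookup fails
except TypeError as err:
    print(err)            # unhashable type: 'list'
```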

Areading314(10000) 5 days ago [-]

I would add to this list:

  * generators
  * defaultdict
  * sets
mythrwy(10000) 5 days ago [-]

Generators and sets have been around for a long time I think.

They were there in 2.X version like 9-10 years ago when I first used Python anyway and aren't 3 specific.

rcfox(1693) 5 days ago [-]

SimpleNamespace[0] is pretty handy too. It essentially lets you wrap a dictionary and access its keys as member properties.

Also, it's kind of weird that the article would mention type annotations and then not mention mypy[1].

[0] https://docs.python.org/3/library/types.html?highlight=types...

[1] http://mypy-lang.org/

DonaldPShimoda(10000) 5 days ago [-]

Whoa, I've never seen the `types` module before! This will actually be very handy for me for what I've been working on lately. Thanks for pointing it out!

rattray(3995) 5 days ago [-]

Curious to see a small code sample illustrating typical usage of SimpleNamespace if you care to share? Docs didn't include one.
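For the curious, a minimal illustration (not from the thread; config and its attributes are invented): SimpleNamespace gives you dict-like storage with attribute access.

```python
from types import SimpleNamespace

config = SimpleNamespace(host='localhost', port=8080, debug=True)
print(config.host)    # attribute access instead of config['host']
config.retries = 3    # new attributes can be added freely
print(vars(config))   # the underlying dict, handy for serialization
```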

reacharavindh(2991) 5 days ago [-]

I was curious to see how asyncio has matured into such a list.. but it was missing from the list. Is it common enough for people to use in scripts to speed up concurrent work yet?

dagw(3422) 5 days ago [-]

Anecdotally, no. From where I sit, the multiprocessing and threading modules still seem to be the main way to do concurrency in python.

DonaldPShimoda(10000) 5 days ago [-]

> Type hinting (3.5+)

> Static vs dynamic typing is a spicy topic in software engineering and almost everyone has an opinion on it. I will let the reader decide when they should write types, but I think you should at least know that Python 3 supports type hints.

I think this gives readers the impression that type hints have something to do with dynamic vs. static type systems, which isn't true. These are merely annotations that are attached to values. In fact, it's legal to use arbitrary strings as type annotations; they have no inherent semantic value.

You can gain semantic value by using a tool like MyPy or an IDE like PyCharm, but the type annotations do not in themselves do anything. I think it could be worth clarifying this for readers who are unaware.
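A quick demonstration of that inertness (greet is an invented example): the hints are just stored in `__annotations__`, and nothing in the interpreter enforces them.

```python
def greet(name: str) -> str:
    return f'Hello, {name}'

# Annotations are plain metadata attached to the function object:
print(greet.__annotations__)   # {'name': <class 'str'>, 'return': <class 'str'>}
print(greet(42))               # Hello, 42 — no runtime error, only mypy would complain
```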

xjlin0(10000) 5 days ago [-]

How to do Type hinting with argument default values?
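For what it's worth, the default simply follows the annotation (greet is an invented example):

```python
def greet(name: str, punctuation: str = '!') -> str:
    # annotated parameter with a default: `name: type = default`
    return f'Hello, {name}{punctuation}'

print(greet('Jane'))        # Hello, Jane!
print(greet('Jane', '?'))   # Hello, Jane?
```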

BlackFly(10000) 5 days ago [-]

They are available in the runtime, so you can do runtime reflection on the annotations and perform dependency injection if that is your thing. In that sense they do something: they provide information to the runtime which you can use if you want.

mrfusion(760) 5 days ago [-]

Are there any tools that can use the annotations and watch a running script and warn you if they're violated? Maybe during your test suite.

Or linters could do some basic analysis.
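Real libraries such as typeguard do this properly, but `typing.get_type_hints` makes a toy version easy to sketch (checked() and scale() are invented names; this only handles simple positional arguments and concrete classes, not generics):

```python
import typing

def scale(x: int, factor: float) -> float:
    return x * factor

def checked(func, *args):
    """Toy runtime check: compare positional args against the hints."""
    hints = typing.get_type_hints(func)
    names = [n for n in hints if n != 'return']
    for name, value in zip(names, args):
        if not isinstance(value, hints[name]):
            raise TypeError(f'{name}={value!r} is not {hints[name].__name__}')
    return func(*args)

print(checked(scale, 3, 2.0))    # 6.0
try:
    checked(scale, 'three', 2.0)
except TypeError as err:
    print(err)                   # x='three' is not int
```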

xtf(10000) 5 days ago [-]

Except further optimisation in Cython like http://docs.cython.org/en/latest/src/tutorial/pure.html#stat...

BerislavLopac(187) 5 days ago [-]

The most underrated part of type annotations is so-called stub files [0], which allow you to keep your annotations separate from the actual code, thus reducing its size while still benefiting from the static analysis with mypy.

Unfortunately the IDE support for those files (.pyi extension) is quite limited -- I have even suggested to the PyCharm team to implement an option to automatically save all annotations to stub files, while still displaying them alongside the code in the editor.

[0] https://www.python.org/dev/peps/pep-0484/#stub-files

bitcurious(3650) 5 days ago [-]

What's the best practice for type hinting a class that's not being imported?

I.e. A module contains a function that accepts a pandas.DataFrame. This module doesn't import pandas.
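One common pattern for this (a sketch; column_count is an invented example) is to guard the import with `typing.TYPE_CHECKING`, which is False at runtime but True under static checkers, and use a string annotation as a forward reference:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import pandas   # only evaluated by type checkers, never at runtime

def column_count(df: 'pandas.DataFrame') -> int:
    # the quoted annotation means pandas need not be importable here
    return len(df.columns)
```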

ehsankia(10000) 5 days ago [-]

Yeah, I'm pretty sure Python itself explicitly uses the term 'annotations' everywhere instead of 'type hints'. The point is that they can be used for anything, typing being one such use.

As for libraries, other than MyPy, there's also Pyre by Facebook and Pytype by Google. I'm definitely excited to see more advanced static analysis being implemented beyond simple type checking. Some of these libraries are starting to explore that.

jedberg(2207) 5 days ago [-]

Wow, today I learned about f strings. That will make my code so much more readable! And no more annoying bugs because I skipped one of the seven references in the string!

mixmastamyk(3422) 5 days ago [-]

It reads like sarcasm, but then you drop a real benefit. ((Confusion by New Order playing in the background))

speedplane(4081) 5 days ago [-]

There's no reason why Python 2 can't have the vast majority of these features. It's a shame that the Python community continues to ignore the reality of the mess they made with Python 3, failing to question what was originally a bad idea.

nurettin(10000) 5 days ago [-]

There is a reason to get rid of bloat. It makes for a more orthogonal language. You don't have randomly sprinkled syntax for printing characters, raising exceptions, catching exceptions, handling strings, implicit type conversions when comparing values, etc which were bad ideas and good riddance.

nerdwaller(10000) 5 days ago [-]

Guido had a good retrospective on the shift: while the change was hard and painful, it needed to happen for the language to be easier to work with longer term.

If you're interested it's worth a watch: https://www.youtube.com/watch?v=Oiw23yfqQy8

xxpor(3985) 5 days ago [-]

This question is done and dusted. There's literally no point in discussing it any more. CPython 2.7 support ends in 7 months. It's time to move on.

DonaldPShimoda(10000) 5 days ago [-]

No, there's no reason Python 2 can't have these features, but there's no particularly compelling reason to invest the engineering effort to develop it.

> the mess they made with Python 3

What mess, exactly?

> what was an originally bad idea.

What makes Python 3 a 'bad idea'?

auscompgeek(10000) 5 days ago [-]

In case anyone is stuck on older versions of Python for any reason, here are some backports of some of the listed features:

* f-strings: https://github.com/asottile/future-fstrings (this is a pretty crazy hack, all the way to Python 2)

* pathlib: https://pypi.org/project/pathlib2/

* typing: https://pypi.org/project/typing/

* enum: https://pypi.org/project/enum34/ (doesn't include any additions in 3.5+)

* functools.lru_cache: https://github.com/jaraco/backports.functools_lru_cache

* dataclasses: https://github.com/ericvsmith/dataclasses (3.6 only, requires ordered class __annotations__)

I'm half-expecting someone to come along with a hacked-up Python 2 backport for implicit namespace packages :)

anentropic(10000) 5 days ago [-]

I was using enum34... then one day I wanted to use a Flag and found it wasn't in there

the same author has another library: https://pypi.org/project/aenum/ which includes the latest enum stuff from 3.5 (plus some other things)

isaacremuant(10000) 5 days ago [-]

With that in mind, the end for Python 2 is less than 8 months away.

Something fun to help people feel the 'impending doom' and port their projects:


gbajson(10000) 5 days ago [-]

The example of an f-string in the article is quite unfortunate: it suggests using it for logging messages. F-strings shouldn't be used in loggers, because they aren't lazily evaluated.
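The lazy alternative is to pass the format string and arguments separately and let the logging module interpolate only if the record is actually emitted. A minimal illustration (Expensive is an invented stand-in for a costly repr):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger('app')

class Expensive:
    calls = 0
    def __str__(self):
        Expensive.calls += 1
        return 'expensive repr'

# Eager: the f-string builds the message even though DEBUG is disabled
log.debug(f'state: {Expensive()}')

# Lazy: %s interpolation is deferred, so the record is filtered out first
log.debug('state: %s', Expensive())

print(Expensive.calls)  # 1 — only the f-string line paid the formatting cost
```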

bonoboTP(10000) 5 days ago [-]

What instead?

masklinn(3521) 5 days ago [-]

TBF for the vast majority of logging it doesn't really matter. Not to mention the laziness is only the (usually cheap) formatting itself. If you're logging the result of non-trivial expressions (which would otherwise not be computed at all) you have to handroll it.

f-strings are mostly unusable for translatable contexts. And format-strings also don't work then (they're a security issue).

tasubotadas(4113) 5 days ago [-]

A cookie for this man!

croh(4130) 5 days ago [-]

1. f-strings and extended iterable unpacking remind me of Ruby. I was really missing them.

2. pathlib is cool. will come pretty handy.

3. Data classes sound good, but there should be only one way to do things, even if you end up writing a few more lines.

4. type-hints are ok for very very big projects.

5. No comment on implicit-namespace-packages yet as still trying to understand solid use-case.

DonaldPShimoda(10000) 5 days ago [-]

> Data classes sounds good but there should be only one way to do things

Dataclasses greatly simplify generation of simple classes which are unnecessarily filled with boilerplate. I'd argue that they are the one way to do this, and that choosing to manually implement the same naive `__init__` function for every simple class you write is the 'wrong' way.

(Here I take 'simple class' to mean a class which takes in some values during initialization and stores them without anything terribly interesting going on during that initialization. A simple class can have methods defined on it, though.)

> type-hints are ok for very very big projects.


> No comment on implicit-namespace-packages yet as still trying to understand solid use-case.

I think it's just... trying to keep your directories 'clean'? I also don't really understand the use of this haha.

baq(3406) 5 days ago [-]

> there should be only one way to do things, even you end up writing few more lines.

the zen of python never mentioned that you should exclusively code in machine code.

dfinninger(10000) 5 days ago [-]

> 4. type-hints are ok for very very big projects.

Great for smaller ones too! I type annotate every function, even

    def parse_args() -> argparse.Namespace:
at the top of simple scripts. With a good IDE (VS Code fits here too), the extra language lookups/insight you get are awesome. Command+mouse_hover gives me a great bit of information about args passed into the function if I annotate the inputs.
dopeboy(2792) 5 days ago [-]

Is type hinting being adopted by Pythonistas today? I've been aware of the feature but haven't used it. To be a modern Python programmer, should I start using it?

DonaldPShimoda(10000) 5 days ago [-]

I use it everywhere because it vastly improves code readability/usability in my IDE (PyCharm). But I don't know if I'm representative of the general Python population.

teekert(4113) 5 days ago [-]

I read quite some things about the data class... in short: If I use a lot of pandas.DataFrames, is this something I should learn about?

krapht(4124) 5 days ago [-]

No. Data classes are a mutable replacement for collections.namedtuple.

brootstrap(10000) 5 days ago [-]

Not sure anyone will see this, but this article is partly trash. When your post is titled 'things you are not using in python3 but probably should' and the first section is on f-strings... I was tempted to close the tab then, but I followed through and did find some interesting things.

But seriously, f-strings? People have been talking about these for years now; it's not new, is it?

DonaldPShimoda(10000) 5 days ago [-]

If you look through the other comments here, you'll find plenty of people who didn't know about f-strings prior to this article.

jwilk(3581) 4 days ago [-]

f-strings were added in Python 3.6, released in December 2016.

All the other features mentioned in the article, except data classes, are older than that.

j88439h84(10000) 5 days ago [-]

The 'Implicit namespace packages' item is not correct. Those are for a specific packaging scenario which doesn't come up in normal usage. Packages should have an __init__.py file in general.

giancarlostoro(3293) 5 days ago [-]

If the file is going to be empty I fail to justify the effort. It is implicitly a package. Is there a case where an empty init file makes sense in Python 3?

jsmeaton(2825) 5 days ago [-]

Thanks, this was the only item I wasn't sure about, and reading PEP 420 didn't really make it clear how this would benefit regular applications.
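The scenario PEP 420 actually targets can be demonstrated in a few lines (directory and package names below are invented for the example): two separate directories on `sys.path`, each contributing modules to the *same* package name. This only works when neither directory has an `__init__.py`; with one, the first directory found would win and the second would be ignored:

```python
# Sketch of PEP 420 namespace-package behaviour using throwaway directories.
import pathlib
import sys
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
part_a = root / "a"
part_b = root / "b"
(part_a / "mypkg").mkdir(parents=True)   # no __init__.py anywhere
(part_b / "mypkg").mkdir(parents=True)
(part_a / "mypkg" / "one.py").write_text("VALUE = 1\n")
(part_b / "mypkg" / "two.py").write_text("VALUE = 2\n")

sys.path[:0] = [str(part_a), str(part_b)]
import mypkg.one
import mypkg.two  # both resolve: 'mypkg' spans both path entries
print(mypkg.one.VALUE, mypkg.two.VALUE)  # -> 1 2
```

This is the "plugins shipped as separate distributions under one top-level name" scenario; for an ordinary application package, an explicit `__init__.py` remains the conventional choice.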

Historical Discussions: AT&T promised 7k new jobs to get tax break, cut 23k jobs instead (May 14, 2019: 555 points)

(555) AT&T promised 7k new jobs to get tax break, cut 23k jobs instead

555 points 6 days ago by JaimeThompson in 3439th position

arstechnica.com | Estimated reading time – 6 minutes | comments | anchor

AT&T CEO Randall Stephenson at the World Economic Forum (WEF) in Davos, Switzerland, on Wednesday, Jan. 22, 2014.

Getty Images | Bloomberg

AT&T has cut more than 23,000 jobs since receiving a big tax cut at the end of 2017, despite lobbying heavily for the tax cut by claiming that it would create thousands of jobs.

AT&T in November 2017 pushed for the corporate tax cut by promising to invest an additional $1 billion in 2018, with CEO Randall Stephenson saying that 'every billion dollars AT&T invests is 7,000 hard-hat jobs. These are not entry-level jobs. These are 7,000 jobs of people putting fiber in ground, hard-hat jobs that make $70,000 to $80,000 per year.'

The corporate tax cut was subsequently passed by Congress and signed into law by President Trump on December 22, 2017. The tax cut reportedly gave AT&T an extra $3 billion in cash in 2018.

But AT&T cut capital spending and kept laying people off after the tax cut. A union analysis of AT&T's publicly available financial statements 'shows the telecom company eliminated 23,328 jobs since the Tax Cut and Jobs Act passed in late 2017, including nearly 6,000 in the first quarter of 2019,' the Communications Workers of America (CWA) said yesterday.

AT&T's total employment was 254,000 as of December 31, 2017 and rose to 262,290 by March 31, 2019. But AT&T's overall workforce increased only because of its acquisition of Time Warner Inc. and two smaller companies, which together added 31,618 employees during 2018, according to an AT&T proxy statement cited in the CWA report.

Excluding employees gained via mergers, AT&T's workforce dropped from 254,000 to 230,672, a cut of 23,328 jobs, the CWA report points out. These numbers are for AT&T's global workforce, but the vast majority of its employees are in the US. AT&T reported having 44,892 non-US employees as of October 1, 2018.

The most recent layoffs affected 368 union technicians in California, the CWA said last week.

AT&T also cut more than 10,000 jobs each year in 2016 and 2017. AT&T had 281,450 employees as of December 31, 2015, 268,540 as of December 31, 2016, and 254,000 by the end of 2017.

AT&T slashed capital spending, too

'AT&T's annual report also shows the company boosted executive pay and suggests that after refunds, it paid no cash income taxes in 2018 and slashed capital investments by $1.4 billion,' the CWA wrote.

AT&T reported $21.6 billion in capital expenses in 2017 and $21.3 billion in 2018, a cut of $300 million. CWA told Ars that the cut is $1.4 billion when 'excluding federal government reimbursements for the construction of FirstNet,' AT&T's government-funded public safety network.

AT&T capital spending is already down more than $900 million this year, as the telco reported Q1 2019 capital expenditures of $5.18 billion, down from $6.12 billion in Q1 2018.

'What AT&T is doing to hardworking people across America is disgraceful,' CWA President Chris Shelton said in the union announcement. 'Congress needs to investigate AT&T to find out how it is using its tax windfall since the company's own publicly available data already raise serious alarm bells. AT&T got its tax cut. Where are the jobs?'

AT&T's actual capital spending of $21.3 billion in 2018 is far short of what AT&T told investors to expect at the beginning of 2018, when it said that full-year capital spending would 'approach' $25 billion and be '$23 billion net of expected FirstNet reimbursements.'

AT&T cuts jobs in "declining" business units

When contacted by Ars, AT&T didn't deny any of the CWA's findings about job cuts. 'We continue to hire in areas where we're seeing increasing demand for products and services, but technology is changing rapidly, and that affects hiring and employment,' AT&T told Ars. 'There are fewer jobs in parts of the business that are declining and facing technology shifts.'

AT&T also said that it 'recently opened new 500-seat call centers in Chicago; Houston; Sunrise, Fla.; and Mesa, Ariz.' and that '[m]ost of the jobs at these call centers will be filled by union-represented employees.'

AT&T said it takes several steps to keep existing employees despite lowering its overall workforce. AT&T said:

We work very hard to keep employees through these transitions: through normal attrition when possible, follow-the-work offers (frequently with a relocation allowance), internal job-matching programs, and voluntary severance offers.

Many union-represented employees have a job offer guarantee that ensures they are offered another job with the company if their current job is eliminated.

When we wrote about AT&T layoffs in January this year, AT&T told Ars that 'we hired more than 20,000 new employees last year and more than 17,000 the year before.' But the company's financial statements make it clear that new hirings fell far short of job cuts.

'While AT&T responds to criticism of its massive job cuts with boasts about hiring, hiring to address turnover is not the same as job creation,' the CWA said yesterday.

All Comments: [-] | anchor

cosmic_ape(10000) 6 days ago [-]

If that's true, isn't there some organization in the government that can sue them for this?

TallGuyShort(3081) 6 days ago [-]

If there was, that organization would announce an investigation that was quickly concluded with a redacted report and confidential hearing and no further action. The chairman of said agency would then mysteriously get a high-paid position as an advisor to the AT&T board of directors, a job that necessitates a young female intern and frequently requires travel to countries with legal prostitution.

padseeker(4014) 6 days ago [-]

Why would the government currently being run by people who are perfectly happy with this want to sue?

It brings up an interesting point. The tax cuts were sold on the premise that they would increase jobs and mean more money for employees. I have a hard time believing anyone who voted for the tax cut actually believed that. And the people lobbying for the tax cuts are not legally obligated to make good on any of the things they said would happen once the cuts went through.

The most likely outcome is that the people who were elected and passed the tax cuts are held accountable and the cuts are rolled back, but don't hold your breath.

javagram(10000) 6 days ago [-]