Hacker News with comments/articles inlined for offline reading

Last updated: December 09, 2019 15:07

Historical Discussions: The Lesson to Unlearn (December 07, 2019: 1173 points)

(1280) The Lesson to Unlearn

1280 points 2 days ago by adunk in 2083rd position

paulgraham.com | Estimated reading time – 22 minutes

December 2019

The most damaging thing you learned in school wasn't something you learned in any specific class. It was learning to get good grades.

When I was in college, a particularly earnest philosophy grad student once told me that he never cared what grade he got in a class, only what he learned in it. This stuck in my mind because it was the only time I ever heard anyone say such a thing.

For me, as for most students, the measurement of what I was learning completely dominated actual learning in college. I was fairly earnest; I was genuinely interested in most of the classes I took, and I worked hard. And yet I worked by far the hardest when I was studying for a test.

In theory, tests are merely what their name implies: tests of what you've learned in the class. In theory you shouldn't have to prepare for a test in a class any more than you have to prepare for a blood test. In theory you learn from taking the class, from going to the lectures and doing the reading and/or assignments, and the test that comes afterward merely measures how well you learned.

In practice, as almost everyone reading this will know, things are so different that hearing this explanation of how classes and tests are meant to work is like hearing the etymology of a word whose meaning has changed completely. In practice, the phrase 'studying for a test' was almost redundant, because that was when one really studied. The difference between diligent and slack students was that the former studied hard for tests and the latter didn't. No one was pulling all-nighters two weeks into the semester.

Even though I was a diligent student, almost all the work I did in school was aimed at getting a good grade on something.

To many people, it would seem strange that the preceding sentence has a 'though' in it. Aren't I merely stating a tautology? Isn't that what a diligent student is, a straight-A student? That's how deeply the conflation of learning with grades has infused our culture.

Is it so bad if learning is conflated with grades? Yes, it is bad. And it wasn't till decades after college, when I was running Y Combinator, that I realized how bad it is.

I knew of course when I was a student that studying for a test is far from identical with actual learning. At the very least, you don't retain knowledge you cram into your head the night before an exam. But the problem is worse than that. The real problem is that most tests don't come close to measuring what they're supposed to.

If tests truly were tests of learning, things wouldn't be so bad. Getting good grades and learning would converge, just a little late. The problem is that nearly all tests given to students are terribly hackable. Most people who've gotten good grades know this, and know it so well they've ceased even to question it. You'll see when you realize how naive it sounds to act otherwise.

Suppose you're taking a class on medieval history and the final exam is coming up. The final exam is supposed to be a test of your knowledge of medieval history, right? So if you have a couple days between now and the exam, surely the best way to spend the time, if you want to do well on the exam, is to read the best books you can find about medieval history. Then you'll know a lot about it, and do well on the exam.

No, no, no, experienced students are saying to themselves. If you merely read good books on medieval history, most of the stuff you learned wouldn't be on the test. It's not good books you want to read, but the lecture notes and assigned reading in this class. And even most of that you can ignore, because you only have to worry about the sort of thing that could turn up as a test question. You're looking for sharply-defined chunks of information. If one of the assigned readings has an interesting digression on some subtle point, you can safely ignore that, because it's not the sort of thing that could be turned into a test question. But if the professor tells you that there were three underlying causes of the Schism of 1378, or three main consequences of the Black Death, you'd better know them. And whether they were in fact the causes or consequences is beside the point. For the purposes of this class they are.

At a university there are often copies of old exams floating around, and these narrow still further what you have to learn. As well as learning what kind of questions this professor asks, you'll often get actual exam questions. Many professors re-use them. After teaching a class for 10 years, it would be hard not to, at least inadvertently.

In some classes, your professor will have had some sort of political axe to grind, and if so you'll have to grind it too. The need for this varies. In classes in math or the hard sciences or engineering it's rarely necessary, but at the other end of the spectrum there are classes where you couldn't get a good grade without it.

Getting a good grade in a class on x is so different from learning a lot about x that you have to choose one or the other, and you can't blame students if they choose grades. Everyone judges them by their grades — graduate programs, employers, scholarships, even their own parents.

I liked learning, and I really enjoyed some of the papers and programs I wrote in college. But did I ever, after turning in a paper in some class, sit down and write another just for fun? Of course not. I had things due in other classes. If it ever came to a choice of learning or grades, I chose grades. I hadn't come to college to do badly.

Anyone who cares about getting good grades has to play this game, or they'll be surpassed by those who do. And at elite universities, that means nearly everyone, since someone who didn't care about getting good grades probably wouldn't be there in the first place. The result is that students compete to maximize the difference between learning and getting good grades.

Why are tests so bad? More precisely, why are they so hackable? Any experienced programmer could answer that. How hackable is software whose author hasn't paid any attention to preventing it from being hacked? Usually it's as porous as a colander.

Hackable is the default for any test imposed by an authority. The reason the tests you're given are so consistently bad — so consistently far from measuring what they're supposed to measure — is simply that the people creating them haven't made much effort to prevent them from being hacked.

But you can't blame teachers if their tests are hackable. Their job is to teach, not to create unhackable tests. The real problem is grades, or more precisely, that grades have been overloaded. If grades were merely a way for teachers to tell students what they were doing right and wrong, like a coach giving advice to an athlete, students wouldn't be tempted to hack tests. But unfortunately after a certain age grades become more than advice. After a certain age, whenever you're being taught, you're usually also being judged.

I've used college tests as an example, but those are actually the least hackable. All the tests most students take their whole lives are at least as bad, including, most spectacularly of all, the test that gets them into college. If getting into college were merely a matter of having the quality of one's mind measured by admissions officers the way scientists measure the mass of an object, we could tell teenage kids 'learn a lot' and leave it at that. You can tell how bad college admissions are, as a test, from how unlike high school that sounds. In practice, the freakishly specific nature of the stuff ambitious kids have to do in high school is directly proportionate to the hackability of college admissions. The classes you don't care about that are mostly memorization, the random 'extracurricular activities' you have to participate in to show you're 'well-rounded,' the standardized tests as artificial as chess, the 'essay' you have to write that's presumably meant to hit some very specific target, but you're not told what.

As well as being bad in what it does to kids, this test is also bad in the sense of being very hackable. So hackable that whole industries have grown up to hack it. This is the explicit purpose of test-prep companies and admissions counsellors, but it's also a significant part of the function of private schools.

Why is this particular test so hackable? I think because of what it's measuring. Although the popular story is that the way to get into a good college is to be really smart, admissions officers at elite colleges neither are, nor claim to be, looking only for that. What are they looking for? They're looking for people who are not simply smart, but admirable in some more general sense. And how is this more general admirableness measured? The admissions officers feel it. In other words, they accept who they like.

So what college admissions is a test of is whether you suit the taste of some group of people. Well, of course a test like that is going to be hackable. And because it's both very hackable and there's (thought to be) a lot at stake, it's hacked like nothing else. That's why it distorts your life so much for so long.

It's no wonder high school students often feel alienated. The shape of their lives is completely artificial.

But wasting your time is not the worst thing the educational system does to you. The worst thing it does is to train you that the way to win is by hacking bad tests. This is a much subtler problem that I didn't recognize until I saw it happening to other people.

When I started advising startup founders at Y Combinator, especially young ones, I was puzzled by the way they always seemed to make things overcomplicated. How, they would ask, do you raise money? What's the trick for making venture capitalists want to invest in you? The best way to make VCs want to invest in you, I would explain, is to actually be a good investment. Even if you could trick VCs into investing in a bad startup, you'd be tricking yourselves too. You're investing time in the same company you're asking them to invest money in. If it's not a good investment, why are you even doing it?

Oh, they'd say, and then after a pause to digest this revelation, they'd ask: What makes a startup a good investment?

So I would explain that what makes a startup promising, not just in the eyes of investors but in fact, is growth. Ideally in revenue, but failing that in usage. What they needed to do was get lots of users.

How does one get lots of users? They had all kinds of ideas about that. They needed to do a big launch that would get them 'exposure.' They needed influential people to talk about them. They even knew they needed to launch on a Tuesday, because that's when one gets the most attention.

No, I would explain, that is not how to get lots of users. The way you get lots of users is to make the product really great. Then people will not only use it but recommend it to their friends, so your growth will be exponential once you get it started.

At this point I've told the founders something you'd think would be completely obvious: that they should make a good company by making a good product. And yet their reaction would be something like the reaction many physicists must have had when they first heard about the theory of relativity: a mixture of astonishment at its apparent genius, combined with a suspicion that anything so weird couldn't possibly be right. Ok, they would say, dutifully. And could you introduce us to such-and-such influential person? And remember, we want to launch on Tuesday.

It would sometimes take founders years to grasp these simple lessons. And not because they were lazy or stupid. They just seemed blind to what was right in front of them.

Why, I would ask myself, do they always make things so complicated? And then one day I realized this was not a rhetorical question.

Why did founders tie themselves in knots doing the wrong things when the answer was right in front of them? Because that was what they'd been trained to do. Their education had taught them that the way to win was to hack the test. And without even telling them they were being trained to do this. The younger ones, the recent graduates, had never faced a non-artificial test. They thought this was just how the world worked: that the first thing you did, when facing any kind of challenge, was to figure out what the trick was for hacking the test. That's why the conversation would always start with how to raise money, because that read as the test. It came at the end of YC. It had numbers attached to it, and higher numbers seemed to be better. It must be the test.

There are certainly big chunks of the world where the way to win is to hack the test. This phenomenon isn't limited to schools. And some people, either due to ideology or ignorance, claim that this is true of startups too. But it isn't. In fact, one of the most striking things about startups is the degree to which you win by simply doing good work. There are edge cases, as there are in anything, but in general you win by getting users, and what users care about is whether the product does what they want.

Why did it take me so long to understand why founders made startups overcomplicated? Because I hadn't realized explicitly that schools train us to win by hacking bad tests. And not just them, but me! I'd been trained to hack bad tests too, and hadn't realized it till decades later.

I had lived as if I realized it, but without knowing why. For example, I had avoided working for big companies. But if you'd asked why, I'd have said it was because they were bogus, or bureaucratic. Or just yuck. I never understood how much of my dislike of big companies was due to the fact that you win by hacking bad tests.

Similarly, the fact that the tests were unhackable was a lot of what attracted me to startups. But again, I hadn't realized that explicitly.

I had in effect achieved by successive approximations something that may have a closed-form solution. I had gradually undone my training in hacking bad tests without knowing I was doing it. Could someone coming out of school banish this demon just by knowing its name, and saying begone? It seems worth trying.

Merely talking explicitly about this phenomenon is likely to make things better, because much of its power comes from the fact that we take it for granted. After you've noticed it, it seems the elephant in the room, but it's a pretty well camouflaged elephant. The phenomenon is so old, and so pervasive. And it's simply the result of neglect. No one meant things to be this way. This is just what happens when you combine learning with grades, competition, and the naive assumption of unhackability.

It was mind-blowing to realize that two of the things I'd puzzled about the most — the bogusness of high school, and the difficulty of getting founders to see the obvious — both had the same cause. It's rare for such a big block to slide into place so late.

Usually when that happens it has implications in a lot of different areas, and this case seems no exception. For example, it suggests both that education could be done better, and how you might fix it. But it also suggests a potential answer to the question all big companies seem to have: how can we be more like a startup? I'm not going to chase down all the implications now. What I want to focus on here is what it means for individuals.

To start with, it means that most ambitious kids graduating from college have something they may want to unlearn. But it also changes how you look at the world. Instead of looking at all the different kinds of work people do and thinking of them vaguely as more or less appealing, you can now ask a very specific question that will sort them in an interesting way: to what extent do you win at this kind of work by hacking bad tests?

It would help if there was a way to recognize bad tests quickly. Is there a pattern here? It turns out there is.

Tests can be divided into two kinds: those that are imposed by authorities, and those that aren't. Tests that aren't imposed by authorities are inherently unhackable, in the sense that no one is claiming they're tests of anything more than they actually test. A football match, for example, is simply a test of who wins, not which team is better. You can tell that from the fact that commentators sometimes say afterward that the better team won. Whereas tests imposed by authorities are usually proxies for something else. A test in a class is supposed to measure not just how well you did on that particular test, but how much you learned in the class. While tests that aren't imposed by authorities are inherently unhackable, those imposed by authorities have to be made unhackable. Usually they aren't. So as a first approximation, bad tests are roughly equivalent to tests imposed by authorities.

You might actually like to win by hacking bad tests. Presumably some people do. But I bet most people who find themselves doing this kind of work don't like it. They just take it for granted that this is how the world works, unless you want to drop out and be some kind of hippie artisan.

I suspect many people implicitly assume that working in a field with bad tests is the price of making lots of money. But that, I can tell you, is false. It used to be true. In the mid-twentieth century, when the economy was composed of oligopolies, the only way to the top was by playing their game. But it's not true now. There are now ways to get rich by doing good work, and that's part of the reason people are so much more excited about getting rich than they used to be. When I was a kid, you could either become an engineer and make cool things, or make lots of money by becoming an 'executive.' Now you can make lots of money by making cool things.

Hacking bad tests is becoming less important as the link between work and authority erodes. The erosion of that link is one of the most important trends happening now, and we see its effects in almost every kind of work people do. Startups are one of the most visible examples, but we see much the same thing in writing. Writers no longer have to submit to publishers and editors to reach readers; now they can go direct.

The more I think about this question, the more optimistic I get. This seems one of those situations where we don't realize how much something was holding us back until it's eliminated. And I can foresee the whole bogus edifice crumbling. Imagine what happens as more and more people start to ask themselves if they want to win by hacking bad tests, and decide that they don't. The kinds of work where you win by hacking bad tests will be starved of talent, and the kinds where you win by doing good work will see an influx of the most ambitious people. And as hacking bad tests shrinks in importance, education will evolve to stop training us to do it. Imagine what the world could look like if that happened.

This is not just a lesson for individuals to unlearn, but one for society to unlearn, and we'll be amazed at the energy that's liberated when we do.


[1] If using tests only to measure learning sounds impossibly utopian, that is already the way things work at Lambda School. Lambda School doesn't have grades. You either graduate or you don't. The only purpose of tests is to decide at each stage of the curriculum whether you can continue to the next. So in effect the whole school is pass/fail.

[2] If the final exam consisted of a long conversation with the professor, you could prepare for it by reading good books on medieval history. A lot of the hackability of tests in schools is due to the fact that the same test has to be given to large numbers of students.

[3] Learning is the naive algorithm for getting good grades.

[4] Hacking has multiple senses. There's a narrow sense in which it means to compromise something. That's the sense in which one hacks a bad test. But there's another, more general sense, meaning to find a surprising solution to a problem, often by thinking differently about it. Hacking in this sense is a wonderful thing. And indeed, some of the hacks people use on bad tests are impressively ingenious; the problem is not so much the hacking as that, because the tests are hackable, they don't test what they're meant to.

[5] The people who pick startups at Y Combinator are similar to admissions officers, except that instead of being arbitrary, their acceptance criteria are trained by a very tight feedback loop. If you accept a bad startup or reject a good one, you will usually know it within a year or two at the latest, and often within a month.

[6] I'm sure admissions officers are tired of reading applications from kids who seem to have no personality beyond being willing to seem however they're supposed to seem to get accepted. What they don't realize is that they are, in a sense, looking in a mirror. The lack of authenticity in the applicants is a reflection of the arbitrariness of the application process. A dictator might just as well complain about the lack of authenticity in the people around him.

[7] By good work, I don't mean morally good, but good in the sense in which a good craftsman does good work.

[8] There are borderline cases where it's hard to say which category a test falls in. For example, is raising venture capital like college admissions, or is it like selling to a customer?

[9] Note that a good test is merely one that's unhackable. Good here doesn't mean morally good, but good in the sense of working well. The difference between fields with bad tests and good ones is not that the former are bad and the latter are good, but that the former are bogus and the latter aren't. But those two measures are not unrelated. As Tara Ploughman said, the path from good to evil goes through bogus.

[10] People who think the recent increase in economic inequality is due to changes in tax policy seem very naive to anyone with experience in startups. Different people are getting rich now than used to, and they're getting much richer than mere tax savings could make them.

[11] Note to tiger parents: you may think you're training your kids to win, but if you're training them to win by hacking bad tests, you are, as parents so often do, training them to fight the last war.

Thanks to Austen Allred, Trevor Blackwell, Patrick Collison, Jessica Livingston, Robert Morris, and Harj Taggar for reading drafts of this.

All Comments:

knzhou(10000) 2 days ago [-]

As somebody who has designed some of the tests Paul Graham is complaining about, it really is hard from the other side!

For example, it's long been known in the physics education research community that students come away from introductory courses with very little physical understanding, even if they can do the plug and chug problems on typical tests just fine. Students can all recite Newton's third law, but immediately afterward claim that when a truck hits a car, the truck exerts a bigger force. They know the law for the gravitational force, but can't explain what kept astronauts from falling off the moon, since 'there's no gravity in space'. Another common claim is that a table exerts no force on something sitting on it -- instead of 'exerting a force' it's just 'getting in the way'.

For research purposes, we measure physical understanding using a battery of tests, such as the Force Concept Inventory, containing only simple conceptual questions with unambiguous answers. So then everybody asks: if ordinary tests are so hackable, why not just switch to these conceptual ones? But that wouldn't work. There are fewer than ~100 distinct FCI-style questions. If these conceptual tests were the norm, students would just memorize the answers and parrot them back, with a flimsy understanding that crumples the second any follow-up question is asked. It would be just the same problem as before, except the students would be worse computationally, too. The FCI only works as long as it doesn't count for a grade.

The problem isn't tests, it's scale. If the people aren't motivated, any standardized measure will miss the mark -- even the entrepreneurship Paul Graham advocates for. God knows I've seen a lot of bullshit in that direction.

gliese1337(3659) 1 day ago [-]

To illustrate just how hard: much of my work in academia was assisting in designing experiments or analyzing data for other people working on PhDs in 'how to design decent tests' (i.e., Instructional Psychology). There was an entire department of the university dedicated to studying just that problem.

rglullis(2367) 2 days ago [-]

> If the people aren't motivated, any standardized measure will miss the mark

Shouldn't that be a sign that trying to come up with standardized measures is a fool's errand?

Short of blatant social engineering, I don't see how we can have a system that could improve people's motivation and drive for education. And if we don't have a way to actually change human nature, why do we keep trying to design this system that would magically work for so many different people?

misnome(4041) 2 days ago [-]

Are there any good resources for finding out more about this and such studies? I went through this process in physics myself, and in many cases it was years afterwards that I properly understood the implications and context of the early courses, and only through repeat exposure could I grok the actual connections in the problems. It seems especially hard to test and evaluate something that is planting the seeds for a more complete understanding, maybe years down the line.

ken(3749) 2 days ago [-]

Science education is hard. I was constantly frustrated in science classes, especially physics, because every 2 minutes I wanted to say 'but then why ___?' Every new thing you learn seems to contradict either common sense (this rock clearly falls faster than this paper!), or something else you've been taught (you mean F=ma was never actually true? and which other equations might turn out to need correcting?).

I recognize it's not feasible for a class of students to constantly interrupt a lecture. The questions that other students had were never the same ones that I had, either. The problem is indeed scale, but I don't think motivation is a primary factor. Eventually you learn to be quiet and accept what the teacher and the textbook say.

contingencies(3232) 1 day ago [-]

> any standardized measure will miss the mark

Goodhart's Law (Popular formulation): When a measure becomes a target, it ceases to be a good measure.

... via https://github.com/globalcitizen/taoup

taeric(2589) 2 days ago [-]

It is amusing how many of my peers still hold to the notion that heavier things fall faster. Not a higher terminal velocity, but the intuition that they accelerate faster. Hilariously tough for folks to shake.

mvaliente2001(10000) about 20 hours ago [-]

Yes, it's hard on both sides. I'm one of those people who got good grades in high school physics (I was able to solve problems) and still thought that satellites were kept in orbit because 'there's no gravity' (or it's too small). And I was genuinely interested in learning.

Part of the problem is the lack of time. Both teachers and students need to cover all the topics in one semester/year, and it takes time to assimilate and integrate the knowledge. Add to that that, as a student, you don't know what's important, even when you're told, if you're told at all. That's why we can hold contradictory models in our minds. They're good enough to solve some problems, but conceptually wrong.

I think a possible solution would be to keep the grades open. You get a B in physics-101 because you've got a flawed model in your mind; then you go to physics-201, where new experiences expose the problems in your model and force you to correct it, and now you've proved you achieved a knowledge that deserves an A.

I know that proposal is impractical and time-consuming, but I think that's how most of us really learn.

xmprt(10000) 1 day ago [-]

One of the best physics tests I took was one where each question had a sense of progression. The first part laid out a fundamental law of physics, and the question was to prove something using it. Then, using the result from that, the problem was tweaked in a few ways over a few more questions, until you were far removed from the original concept but could still explain why and how something worked.

To take your gravity example, the first question might be something about calculating the gravitational force on the ISS. Then explain that it's orbiting, so gravity supplies the centripetal force. Then compute how fast it must be moving. And from there you can switch to a completely different topic in the same context. Maybe something about EM radiation from the Sun that falls on the space station.
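The progression described here can be sketched in a few lines of arithmetic. A minimal Python sketch (the constants and the roughly 400 km ISS altitude are my own assumptions, not from the comment):

```python
import math

# Assumed round-number constants for illustration
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of Earth, kg
R_EARTH = 6.371e6      # mean radius of Earth, m
ISS_ALTITUDE = 4.08e5  # approximate ISS altitude, m

r = R_EARTH + ISS_ALTITUDE

# Step 1: gravitational acceleration at the ISS -- far from zero
g_iss = G * M_EARTH / r**2

# Step 2: if gravity supplies the centripetal acceleration, v^2 / r = g_iss,
# so the orbital speed follows directly
v_orbit = math.sqrt(g_iss * r)

print(f"g at ISS altitude: {g_iss:.2f} m/s^2")        # ~8.7, about 88% of surface gravity
print(f"orbital speed:     {v_orbit / 1000:.1f} km/s")  # ~7.7 km/s
```

The point of the exercise is the second step: gravity at orbital altitude is nearly as strong as at the surface, and it is exactly what keeps the station circling rather than something the station has escaped.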

pergadad(10000) about 18 hours ago [-]

I work in a related field and find your answer really sad. The answer is simply that standardised tests are not a suitable measure of learning. They miss most of the important insights, and even efforts to open up, like the attempt to measure 'problem solving' in PISA, are doomed to stick to very narrow concepts.

Standardized testing is good for:

- measuring system performance against set standards

- giving basic insights into students' average level of knowledge of a subject (if you use a thorough theoretical framework to conceptualise learning outcomes and link test items to it)

- maybe measuring school performance against standards

- maybe measuring teacher performance against standards

But for students this individual measure of an exam is simply not a good measure of learning in any modern education system. Yes, it is a useful measure in really bad systems, where teachers are bad and teach the wrong things, or where, e.g., attendance and exam fraud are huge issues.

But modern pedagogy is not about teaching book knowledge, which is easily measured. In any Western education system we are mostly past that point. It's about helping students discover their own interests and abilities, develop competence and confidence, experiment, fail, and try again. This learning is simply too individual and hard to measure for a standardised test.

The only option is formative assessment (essentially teachers grading students repeatedly throughout the year on things like effort, achievement of general learning aims, additional development, results produced, presentations, support to other students, etc. — a bit like how you would measure a good office worker). Or maybe do like Finland and Estonia and mostly eliminate the testing burden.

woodandsteel(3502) about 15 hours ago [-]

The problem is that so much of physics is quite counter-intuitive (that is one reason it took so long to discover).

The basic principle of education is to start where the student is and then lead them step by step to what you want them to understand. That being the case, I would say physics should start with students' intuitive understanding and then explain, through examples they can understand, why some of their intuitions are mistaken. Maybe this could even be turned into Socratic questioning.

maxerickson(1019) 2 days ago [-]

Ever look at Keller Plan stuff?

I did Physics 2 that way and really liked it.


(also, wow, I just got that Principal Skinner in The Simpsons doesn't accidentally have that name. Face to palm.)

flyGuyOnTheSly(4040) 2 days ago [-]

Funny to read and enjoy this article this morning...

Open up HN again this afternoon for a quick work break...

And find my eyes IMMEDIATELY glancing at my karma score in the top right before anything else.

Perhaps PG should practice what he preaches and rip that out of the UI?

I would encourage it... damnit... I just looked again.

darkkindness(10000) 2 days ago [-]

Just inject:

td:nth-child(3) > span.pagetop { color: transparent; }

Hides karma, leaves user and logout intact (as they're links)

dnprock(3839) 1 day ago [-]

I feel like Paul Graham is unlearning his own experience. Maybe he's at the top of the game and is trying to figure out why it's messed up. Life is full of hacks. People hack stuff: school, tests, careers. The site we're on is called Hacker News; I suppose Paul Graham gave it the name.

When I was in school, I did all kinds of hacks. I didn't graduate high school in the US; I took the GED to go to college. I did challenge tests to skip classes and save money. I got to know people who collected old tests, and they traded them with me for homework help. Everyone graduated, got jobs, and continued their life hacks.

The focus on learning and doing is important. As long as we're delivering, we can hack some stuff. The philosophy debate between Kant and Locke will never end. There's a balance for each person. We need to discover it ourselves.

GavinMcG(4160) 1 day ago [-]

It seems like you're responding to the idea that 'hacking is bad' and that's not at all what the essay got across to me. In fact, it outright agrees that 'the focus on learning and doing is important.'

Instead of 'hacking is bad', the essay was saying 'incentivizing young people with a single easily-hacked methodology is bad'.

gcc_programmer(10000) 2 days ago [-]

TL;DR: Einstein said something along the lines of 'Don't let your education get in the way of your knowledge.' Finish uni with high grades, then spend the rest of your life learning.

username90(10000) 2 days ago [-]

> Finish uni with high grades then spend the rest of your life learning.

I'd rather not waste 30% of my life on that nonsense. Better to use uni for what it is meant for: Learn whatever seems interesting and don't mind the grades.

manmal(4170) 2 days ago [-]

My wife and I are contemplating unschooling our kids. The longer our son is in primary school, the more we see the deterioration of his willingness to learn anything school-related. He hates homework (as do most kids), and this is even more painful to see given that it has been shown that homework is essentially useless for learning performance. He still likes to write and do math, but as long as he is in school, we have to take care that his interest doesn't go south.

During school holidays, he actually starts to do school exercises for fun; those are the very same exercises he would fight against doing for hours on a regular school day. Go figure.

Btw, Gates, Page, Brin - all were not in a conventional school. The capacity to just build things without fear of being judged is invaluable.

pbhjpbhj(4000) 2 days ago [-]

Alternative option: tell the school he's not doing homework - they're providing a service to parents to aid with a child's education, and if that's not aiding, then change the contract.

FWIW the UK Education Act actually enshrines this concept (parents are responsible for a child being educated), though this has been eroded considerably by Tory policy in the last decade or so. We did tell our primary school 'don't expect homework from $child[0]' one year, they didn't complain. However we also wanted to do flexschooling and the head refused it, which I'm still smarting over and consider to have been a significant detriment.

That same primary school ditched regular style homework in favour of projects, which $child[1] wants to do, and we can decide easily to curtail the time used doing it if we want. It's a great way of doing homework IMO; very child-centric.

jraby3(4211) 2 days ago [-]

We thought the same way but it's difficult to replace the social component. Still it's hard to see how bad the school system is and force our kids to participate. It could be so much better.

eeZah7Ux(3214) 2 days ago [-]

> The longer our son is in primary school, the more we see the deterioration of his willingness to learn anything school-related

Been there myself.

Most school systems have been developed to produce workers, not scholars and researchers.

Hence, rote memorization is prioritized.

Getting people to hate learning is not seen as a serious failure of the school system.

Also, students are never encouraged to question the information they receive or the decision around what is important to learn and what is not.

rb808(2997) 2 days ago [-]

> Gates, ... - all were not in a conventional school

Wikipedia says he went to a regular private school and then went to Harvard, which sounds pretty conventional. Yeah, he didn't finish, but worry about that later.

tsimionescu(10000) 2 days ago [-]

> it has been shown that homework is essentially useless for learning performance

I am curious, is this about some particular kind of homework? From my own experience, I would think that exercising a subject alone (which is what most homework was in my school system at least) is a very good way of understanding it better. I of course remember not wanting to do homework many times, and I remember excessive or idiotic homework assignments, but overall homework seemed to be one of the major modes of learning for me.

Also, related to Gates, Page, Brin - I don't generally think that it's a good idea to look at a small number of very successful people and try to emulate some part of their life experience - you are, in general, very likely to fall for some kind of survivorship bias, or accidentally home in on a less important detail (for example, Gates was probably helped much more by his extremely wealthy and well-connected family than by any particular aspect of his schooling).

On the other hand, there are many voices saying that one of the important purposes of the public education system is to engender conformity, which is very rarely conducive to a truly out of the ordinary life. Noam Chomsky would be one of the most credible, and he has often credited his own career to his non-standard early schooling.

mattchew(10000) 2 days ago [-]

It's worth contemplating. We homeschooled our kids and it was a good choice for our family.

I'm sympathetic to unschooling, but I've also seen it go wrong. If you decide to unschool, don't be dogmatic about it. Feel free to be more or less unschooly if that seems to be needed. Don't get sucked into a community where unschooling 'purity' is highly valued.

avip(4083) 2 days ago [-]

All true, known, and well established. But homeschooling requires a parent to sacrifice everything else and be devoted to it. Not an easy decision. It's much easier to find a good school!

nmfisher(10000) 2 days ago [-]

I'm curious about Page & Brin - I wasn't aware they didn't go through conventional schooling. What was their background?

prox(10000) 2 days ago [-]

I remember this history teacher who was a breath of fresh air. He told us in advance his tests were very simple. And the reason was quite genius: if you just listened and took a few notes here and there, you'd pass the test. Which freed the mind to really absorb the knowledge and have fun with the subject matter. He was also very good at discerning knowledge and teaching why a certain topic was good to know. I think I remember more from those classes than any other.

coffeemug(452) 2 days ago [-]

Don't know about Page and Brin, but according to the Netflix documentary Gates won state-level math Olympiad at a level three grades higher than his own. So he wasn't a modal child. You can't draw any conclusions from that about education at scale.

volume(10000) 2 days ago [-]

'The capacity to just build things without fear of being judged is invaluable'

For me, I had to unlearn self judgement. This has helped so much over the last few years.

username90(10000) 2 days ago [-]

> He hates homework (as do most kids)

I just didn't do my homework and got bad grades. Luckily in my country you can always get in via standardized tests instead of grades so I did well in life anyway. I'm not sure why homework is weighed so heavily in grades, it mostly just shows how much free time the students are willing to sacrifice. The biggest cost this had for me was the guilt that I hadn't done what I was supposed to do, which made me feel like a bad person, but in hindsight I did much better than most of my more studious peers so it really wasn't all that important.

anonytrary(4070) 2 days ago [-]

> Btw, Gates, Page, Brin - all were not in a conventional school.

This is Anecdata^3, you probably shouldn't make rash decisions about your children based on 3 successful technical entrepreneurs.

watwut(10000) 2 days ago [-]

> but as long as he is in school, we have to take care that his interest doesn't go south. [...] During school holidays, he actually starts to do school exercises for fun; those are the very same exercises he would fight against doing for hours on a regular school day. Go figure.

That does not sound like exercises themselves would be bad or that he would cease to be interested in learning overall.

It sounds more like after many hours at school, he wants to do different things that day. But when school is not in session, he seeks out similar activities, so his willingness to learn has not deteriorated. That actually sounds healthy and not wrong at all. In school, kids spend a lot of time learning or doing focused activities; a kid wanting to just play after that should not be a sign of something grave.

The only thing that is wrong or unusual is the hours-long fight against homework. I don't think that is normal for the majority of kids. Only some parents I know report that much daily fighting; most have a fight here and there once in a while.

shotashotashota(10000) 2 days ago [-]

What about socializing?

choppaface(4108) 1 day ago [-]

"But you can't blame teachers if their tests are hackable. Their job is to teach, not to create unhackable tests. The real problem is grades, or more precisely, that grades have been overloaded."

Two important things here:

* Grade inflation makes for happier students and counterbalances the value of test hacking. Today there are very few schools that don't do grade inflation. Without considering grade inflation, much of this essay is simply Graham entertaining his own nostalgia. (I'm not necessarily a proponent of inflation, but it's a key phenomenon missing from the essay.)

* Good teachers hold lots of office hours and give good feedback through those offerings. Email support is also a lot more popular today. You can also play games like test corrections, rough drafts, etc. Students don't learn well without feedback, and most teachers who don't invest in it won't succeed today. This focus towards feedback in modern teaching is at odds with the education community profiled in this essay.

If there's one lesson to un-learn, it's to forget about giving attention to non-constructive papers that lack evidence. While this essay gives a thorough criticism of tests, it offers no concrete alternative. And it offers no grounding for the claimed 'good' student who 'focuses on valuable learning.' Graham, like any other VC, seeks to upwell sentiment in the interest of controlling it (e.g. creating an investment asset out of it). Reading this essay doesn't teach you anything about hacking unless you recognize that the author is trying to hack you himself.

pmichaud(3551) 1 day ago [-]

The suggested alternative I got from the essay was 'try to cause a real, specific outcome in the world. If the action you took caused the outcome, you passed the test.'

vezycash(272) 2 days ago [-]

>The real problem is that most tests don't come close to measuring what they're supposed to

This is what happens when the teacher's hidden goal is to make marking/grading easier.

TeMPOraL(2647) 2 days ago [-]

Couple that with students' hidden goal being 'not ruining my career prospects and/or angering my parents by getting bad grades', and you arrive at modern school.

astatine(10000) 2 days ago [-]

I don't think students are the blameless victims of this 'system'. I know of a continuous evaluation system adopted in a college, where the final tests had less than 50% weightage for the grade. The students rebelled. It was far easier for a majority to burn the midnight oil for a few weeks than to be diligent throughout the course. The test model has stuck because it's convenient for everyone, regardless of its clear weaknesses.

matwood(10000) 2 days ago [-]

I'm one of those weird outliers, I guess, but even when there were only 1-2 exams for the whole course I stayed diligent throughout. The night before a test I just did a bit of review and got a good night's sleep. I never understood those who tried to cram, because it's not something that would ever work for how I learn and understand.

hooande(3727) 2 days ago [-]

I know that learning how to hack systems is valuable because that's how I got into YCombinator. I set out specifically to hack the application process and succeeded. It wasn't even that difficult to do.

If pg believes what he's saying in this essay, I have two questions:

1. why did he design a testing system that was easily hacked?

2. why does YCombinator itself focus so much on hacking the venture capital system, as opposed to making great products? [1]

I think the answer to both is that the ideas presented in this essay don't scale. If you can't create a 200 person per year startup incubator that focuses on true learning instead of process hacking, then how could someone do it for a 20,000 person university? Things get boiled down to metrics like grades and valuations because they are the only way to measure/manage the productive output of a large number of people.

The insight that I've had is that as long as large civilizations and organizations exist, there will be easily hackable systems. There is a direct relationship between the number of humans involved in a thing, and the number of opportunities to hack that thing. Understanding and exploiting this seems like a valuable ability.

[1] I was personally struck by how much of YCombinator was (in 2008) oriented around learning what magic words to say to investors and exactly when to time techcrunch launch articles, etc. There seemed to be no time given to 'how to make a great product', likely because this is something that can't be taught

dunkelheit(4176) 1 day ago [-]

Observation: creators and maintainers of hackable systems will typically spend a lot of effort to convince you that these systems are not hackable and absolutely reward only 'honest effort'. Pg's essay meshes very well with this observation.

Ozzie_osman(4150) 2 days ago [-]

Spot on. If you're interested in this sort of stuff, I'd recommend the book Seeing Like a State. The author basically walks through most of history as viewed through the lens of authorities trying to measure things (usually to extract taxes or exercise control), and traces everything from the layout of cities to how we name ourselves back to authorities trying to standardize things so they can measure them.

wrinklytidbits(10000) 2 days ago [-]

I think what he's saying is that the approach is wrong. The highly industrialized school system is the way it is because of scale. His starting point is the ideal (paraphrasing: taking a test should be like taking a blood test).

To add to that point, take a urine test: people have been hacking it for years. The purpose of the urine test is to see what residual byproducts from certain drugs are in my system. I can hack it by using someone else's urine or by taking drugs that have a short lifetime in my bloodstream. Otherwise, to pass the test I have to not consume drugs.

His complaint is that people who pass the tests really shouldn't have. It's like a principal assuring parents that a strict urine exam is given to all teachers, only to find out that all the teachers are drug addicts (drug addicts who can hack the pee test).

His reference to private schools teaching students how to hack tests resonates with me, like social workers teaching drug users how to hack their pee tests. I take that to mean that if teachers are doing their best but resort to teaching how to hack tests, then there needs to be more work done outside of the teacher-student system.

[This is where disparities due to parents' wealth and income come in:] Students who have access to tutors can get help in a way that having a teacher alone can't provide. A teacher can be a tutor, but cannot be a tutor for everyone (an issue of scale).

lostcolony(4189) 2 days ago [-]

This was my immediate objection as well. School, as it stands, was actually -incredibly- useful to me. Because it taught me that the stated objectives of a person, organization, etc, are often lies, and that the trick is to learn the real objectives, and optimize accordingly. Between classes that actually taught me the material, classes that expected me to learn the material without teaching it to me (but still expected me to complete relevant projects that required understanding), and classes that required me to learn completely separate things (i.e., yes, physics class, where the taught material, homework material, and tested material, were all ENTIRELY DISTINCT FROM EACH OTHER, and so I had to learn to navigate that and reach out for help from TAs and past students and etc), I feel like I got a more complete education in how the world works.

That's not to say 'hack the system', but it is to say look deeper. There's value in learning. There's value in the stated objectives; after all, there's a reason they're being called out. But there often are unstated, important systems and objectives to understand and address, too.

I'd love this dream world PG describes where everything aligns, the stated priorities are the only ones, and the systems and cultures and tests optimize only for them. But that's not the world we live in.

derefr(3684) 2 days ago [-]

> There seemed to be no time given to 'how to make a great product', likely because this is something that can't be taught

It can be taught, but the general principles are weak (it's basically "how to be an effective futurist"), with most of the useful knowledge being domain-specific.

And the thing with the specific domains is, either they're not understood yet—in which case everyone is flailing around—or they are well-understood—in which case the market has already consolidated around the companies who can best operationalize the set of techniques required to create great products for that market, and the best you'll hope to do is to get bought out by one of them, not to compete or replace them.

Because, in the end, having a great product is a multiplier, but so is execution; and execution provides compound interest on its gains, such that corporations inevitably execute exponentially better as they age.

In other words, the only time where you can set out to win a market, and actually have a hope of doing so, is precisely when we don't yet know how to operationalize the production of great products for that domain.

hn_throwaway_99(4107) 2 days ago [-]

My take on this:

Having a great product, building things people want (i.e. the 'non-hackable' parts of building a startup) are necessary but not sufficient requirements for building a successful startup. What YC tries to teach is all the other parts (how to deal with VCs, what metrics to focus on, how to manage your time, etc.) that are also necessary, but not the core of what makes a good company. These are the 'hackable' parts if you will. But you're very likely to be unsuccessful if you only have those hackable parts right, but still don't have a product people find useful and want to buy.

Since you're not posting anonymously, I'd be curious if you could offer more information on what parts of YC you think you hacked, and how your startup turned out.

iamwil(356) 2 days ago [-]

It's because he doesn't believe getting into YC is the achievement. Creating a company with users / profitability / growth is. You can get into YC and still fail with your company. And it's desirable for founders and YC to be more interested in building a successful company.

When he says 'make a good product', I think he's trying to convey a core and fundamental principle of business that gets obfuscated by all the things you have to do to be successful in a startup. That's necessary when teaching beginners.

But he's traditionally had a hard time conveying this seemingly simple concept, and he's always been puzzled by it. You can see it in the previous essays he referenced in this one--the high school one, and how founders over-complicate things.

Though he's talking about this topic as if creating the company is the goal, I'm not sure to what degree he realizes how YC contributes to the effect he's seeing.

Even after getting into YC, I was moderately surprised by how you're still being judged, sometimes explicitly and other times implicitly. I had batchmates who didn't want to go into office hours, for fear of appearing weak and hence not getting a follow-on recommendation to investors from a YC partner.

Of course, I think PG would say that's stupid. If you go into office hours and fix your problem, then by virtue of being a better startup, you'll be a more attractive investment. Trying to do it the other way is putting the cart before the horse.

That said, the effect is real. Like the Observer Effect in quantum mechanics, by virtue of being a conduit to further funding, founders will turn their attention to the tests. Because if you don't, it's easy to be written off and not be able to get the help that you need.

coffeemug(452) 2 days ago [-]

Companies are this way too. The best companies are all run by OKRs and metrics. But if you peek under the hood, all of it is insanely hackable; you have to be naive to try and actually meet your explicitly stated objectives because that's not how you get promoted.

It reminds me of a quote from The Elephant in the Brain. I don't remember the wording exactly, but it's something like 'if there is a behavior in a large group of people that doesn't make sense to you, not only is it by design, but it's also probably the whole point.'

Incidentally, I was also struck by how much of YC was (in 2009) about hacking venture capital. But lots of people from that era now say this behavior is unethical and disavow all knowledge that it was ever actively promoted. It's like we lived through different realities.

If I were to steelman The Lesson to Unlearn I think I'd say this. On an individual level there is enormous benefit to learning how to operate without hacking a test. That's how you do great work; that's what original thinking is. On a societal level this seems unshakable. But societies have made dramatic shifts in behavior and social organization before. From hunter gatherers to farmers to city dwellers, for example. So while this is hard to imagine it might just be that-- lack of imagination. The social return to abandoning this system might indeed be enormous.

paulsutter(1222) 2 days ago [-]

pg's articles are all about how to build a great product (Startup=Growth, for example). Are these VC tips-and-tricks secret insider-only info? They don't publish anything like that

> YCombinator was (in 2008) oriented around learning what magic words to say to investors and exactly when to time techcrunch launch articles, etc. There seemed to be no time given to 'how to make a great product"

BoiledCabbage(10000) 2 days ago [-]

That's also my largest concern with this essay. PG describes the world the way he wants it to be, not the way it is. And I haven't been inside of YC, but everything I've seen suggests that YC is essentially a program that combines some level of 'support' with learning to hack VCs.

If it were simply build the best product then there wouldn't be a need for YC. If it were simply build the best product there wouldn't be a need for marketing or sales or business degrees.

The hacks involved in marketing and sales became so important to any company's success that over time they've become entire fields of their own. Same thing with an MBA - essentially a degree for hacking business. Regardless of what you think of it, it's clear from its financial returns that it works.

One of the biggest lessons constantly mentioned of startups is how naive engineers are to the business side of things and there is more than building a great product. In this essay PG takes all of that and throws it out the window. Is that suddenly no longer true? My read of it is that he (rightfully) likes YC and as a result doesn't see how it has the same flaws as the systems he describes. Essentially like every parent who only sees their kid as beautiful.

impendia(3900) 2 days ago [-]

> If you can't create a 200 person per year startup incubator that focuses on true learning instead of process hacking, then how could someone do it for a 20,000 person university?

As a university professor, I can at least try. Not at scale -- certainly not for my entire university -- but for my own classes as an individual.

There are several constraints, of course. I have to give grades, and students care about grades, a lot. These grades should measure (to some extent) how prepared students are to proceed to whatever is next. Students have a number of conflicting priorities, and it is natural to concentrate on whatever is most urgent. And students come with wildly differing backgrounds and interest levels (and amounts of available time), and I have to be consistent and fair to everyone.

But, when I grade, I have a lot of leeway to decide what I believe is worth measuring, and measure that. My aim -- which I hope I at least partially realize -- is to ensure that the best way to 'hack' my classes is to learn what I hope the students will learn.

daly(4187) 2 days ago [-]

I got into 'computer science' before there was a CS degree. Our CS classes were taught in the math dept (fortran), business dept (cobol), or engineering (assembler).

The profs were literally 1 chapter ahead of the students.

I, however, was in love with the subject. I was the 'student advisor' in our 'machine room'. We had 5 teletypes connected to a remote mainframe at Rutgers. I lived and breathed CS.

I wrote tests for the profs (even though I was also in the class) and I answered questions for students about the tests when they came for computer help.

Pick a subject you love so deeply that you're always at the leading edge (which essentially means a new subject the school wants to teach). For example, CMU introduced an AI curriculum. Learn how to do NNs, DNNs, GANs, etc. Read the latest papers. Write working code. Chat with the profs. Hang around the dept.

Proof systems are a hot topic. Learn Lean, Coq, Agda. Learn to write proofs by machine. Read the papers. Get ahead of the curve. It is hard to 'fail' a student who knows a lot more than you, especially when they help craft the tests.

Intel stuck an FPGA inside its CPU. Learn Verilog and learn to hack new CPU instructions (e.g. read Gustafson's book 'The End of Error', which introduces a new kind of floating-point arithmetic). Write code that outperforms BLAS code.

Learn BPF (Berkeley Packet Filter) so you can do impossible things in the kernel from user mode. Make your networking class look like it's from the stone age.

I could go on but you either 'get it' or you don't.

kaymanb(10000) 2 days ago [-]

I think I fall into the 'don't get it' category. Could you explain what I should be taking away from this?

I understand that being at the leading edge has benefits, but it's not feasible to understand the state-of-the-art across an entire CS curriculum.

Zelphyr(4101) 2 days ago [-]

It sounds like what you're saying is: the best way to hack a test is to actually learn the subject.

Thus, the best way to learn is to be in love with the subject.

songzme(2167) 2 days ago [-]

> I would explain that what makes a startup promising, not just in the eyes of investors but in fact, is growth.

Growth is like cancer. Eventually a company will get big enough to bully others for the sake of growth.

Last night I had dinner with a friend who is building a (yet another) language learning app. Simple fact about language learning, or learning in general: You must practice almost every single day. Even if it is only 5-10 minutes.

PG himself said, if you don't die (aka give up), you will eventually succeed: http://www.paulgraham.com/die.html

To me, the best, most sensible way for learning applications to measure success is to delete user accounts with > 3 days of inactivity. This way you can measure the absolute effectiveness of your program. Students who stick with your application should 100% become fluent. Target the users who actually want to learn, and measure your success by how many users you made successful.

'NO no no', he says. You'll have no users if you do that.
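The '> 3 days of inactivity' rule proposed above can be sketched in a few lines of Python (the account names, dates, and `prune_inactive` helper are purely illustrative, not from the comment):

```python
from datetime import date, timedelta

# Hypothetical account records: (user_id, last_active_date).
accounts = [
    ("alice", date(2019, 12, 8)),
    ("bob",   date(2019, 12, 1)),
    ("carol", date(2019, 12, 9)),
]

def prune_inactive(accounts, today, max_idle_days=3):
    """Keep only accounts active within the last `max_idle_days` days;
    under this scheme, everyone else would be deleted."""
    cutoff = today - timedelta(days=max_idle_days)
    return [(uid, last) for uid, last in accounts if last >= cutoff]

today = date(2019, 12, 9)
survivors = prune_inactive(accounts, today)
print([uid for uid, _ in survivors])  # ['alice', 'carol']
```

With this metric, "success rate" becomes fluent users divided by surviving accounts, rather than raw user count.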

mattmaroon(2354) 2 days ago [-]

There's a lot of evidence that taking breaks helps with language learning. And there are several other ways to learn languages. I'll sometimes stop using Duolingo and listen to Spanish-language podcasts instead for a couple of weeks.

pavlov(2172) 2 days ago [-]

The tech industry has unfortunately adopted the methodology of centralized hackable tests as the canonical gatekeeping method in the form of programming interviews.

Most big tech companies don't care about how good you have been at delivering value through creating software: they want to see you deliver a very specific type of performance at a whiteboard. Interviewers are given specific math-puzzle questions to ask. Interviewees are explicitly told by the same companies' hiring departments that they should aim to hack the system by studying books like 'Cracking the Coding Interview'.

This is an industry that prides itself on supposedly making data-driven decisions through A/B testing. When it comes to hiring people to make those decisions, everybody just plays along to a decades-old script.

johnrob(3852) 2 days ago [-]

As PG implies, the work environment at big companies consists of more hackable tests - so (ironically) that type of interview might be a proper test.

acangiano(201) 2 days ago [-]

I work at a very large company (IBM) and I refuse to interview people that way. My track record for great hires (both full-timers and interns) is arguably impeccable. No whiteboard or brain teaser quizzes involved. I wrote a little about my interviewing approach in my latest 'hiring' post: https://programmingzen.com/new-ibm-internship-positions-in-m...

ericsoderstrom(10000) 1 day ago [-]

> Most big tech companies don't care about how good you have been at delivering some value through creating software: they want to see you deliver a very specific type of performance at a whiteboard

PG specifically points out that success within a large tech company is predicated on bogus hackable tests. So it makes sense that their selection criteria should also be a hackable test. The better you do on that artificial selection criterion for admission, the better you're likely to do on the artificial selection criteria of internal career progression. So in this particular case, the hackable admission test is a good proxy.

kccqzy(3070) 2 days ago [-]

I think testing algorithmic questions is certainly evil, but it's a necessary evil, because all other methods are either too time-consuming or unrealistic. You wouldn't believe how many clearly unqualified applicants a big company gets, and there has to be some quick and dirty rule to filter them. Perhaps it's the aura of FANG companies that makes unqualified applicants try, since they have nothing to lose. My experience is that only about 1% of interviewees can confidently write a depth-first search and explain its time complexity. If they can't even do that (which is basically among the first things schools teach in a CS curriculum), how do you trust that they have the skills to design and implement even more complicated systems?
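For reference, the baseline kccqzy describes is genuinely small - a minimal iterative depth-first search in Python (the graph and names here are my own illustration, not from the comment):

```python
def dfs(graph, start):
    """Iterative depth-first search over an adjacency-list graph.

    Runs in O(V + E) time: each vertex is pushed and visited at most
    once, and each edge is examined at most once.
    """
    visited = []          # vertices in the order DFS reaches them
    seen = {start}        # membership set to avoid revisiting
    stack = [start]
    while stack:
        node = stack.pop()
        visited.append(node)
        # Reversed so neighbors are explored in their listed order.
        for neighbor in reversed(graph[node]):
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return visited

graph = {
    'a': ['b', 'c'],
    'b': ['d'],
    'c': ['d'],
    'd': [],
}
print(dfs(graph, 'a'))  # ['a', 'b', 'd', 'c']
```

Being able to produce something like this and state the O(V + E) bound is the whole bar being discussed.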

lacker(1694) 2 days ago [-]

> The tech industry has unfortunately adopted the methodology of centralized hackable tests

I think the opposite is the case. Every industry has some sort of test to get into it, but the other industries have the most centralized and hackable ones. The tech industry does the best job of any of the top industries of keeping its application process open to skilled people who don't match the 'centralized' standard.

Your grades in college, the experience on your resume, those are 'centralized' tests. You have one resume and you send it out to everyone.

When companies give programming tests to applicants, each company is free to measure the applicants' skill in any way they see fit. One might ask math puzzle questions, but another company might give take-home Rails projects. And companies are free to just do a 'soft' interview and look at your resume if they want.

Facebook hires far more software engineers with no software engineering education than Chevron hires chemical engineers with no chemical engineering education.

Plus, starting a company is open to anyone with a credit card to open an AWS account. No interview required. Just make something people want.

hkmurakami(1885) 2 days ago [-]

I've been curious for a while about the inflection point between the more informal startup programming interview style and the bigco interviewing style, and, while there's obviously variability between orgs, whether it comes with headcount, customer profile, fundraising milestones, etc.

areyes(10000) 1 day ago [-]

When I started preparing for interviews, I spent most of my time studying general data structures and algorithms and practicing applying them to different problems. I don't think this is so bad; it can lead to a lot of learning you wouldn't do otherwise.

The frustrating thing is how easy it is to hack the test. After one particular interview, I remember talking to a friend about how hard the technical interview was. He told me that he had already seen the problem and knew it would come up, because he had bought LeetCode premium for the interview. Kind of frustrating to spend hours and hours learning data structures and algorithms when the real key to success is getting lucky and memorizing the problem beforehand.

7402(3181) 1 day ago [-]

I think I blame Google, because it was started by a couple of graduate students.

I don't know how it was at Stanford, but at Berkeley in Physics there was a series of big honking tests ('Qualifying Exams') that dominated our attention the first two years. You were expected to know all of undergraduate Physics. Fail it, and you're out.

Grad students don't know any better. 'How do we hire the best people? I know - they'll have to pass a big hard test, just like we did.'

Then everyone else goes, 'How do we hire the best people? I know - we'll do it like Google does, they seem pretty successful.'


No, I don't know that it really went like that. I just suspect it.

burlesona(3734) 2 days ago [-]

Being a hiring manager at a big company, I can tell you this is just as frustrating for me as it is for candidates. I hate "leet code" and frankly find algorithmic interviews to be very low signal compared to more practical, open-ended, domain-specific problems.

I will say though, the problem is one of "standardization" across an organization where it's too big for everyone to fit in a room.

Suppose you give each team high autonomy to hire whoever they like using whatever "good" process they come up with. 90% of the time this results in good hires. But as you grow, that ten percent of underperforming people becomes large in absolute numbers, and is very painful to deal with.

It becomes a real problem when relatively lower performing people end up concentrated on a team, and then start being the hiring gatekeepers for that team, thus multiplying the number of lower performing hires.

Later you start having institutional problems when everyone starts to perceive that the engineers in Department A are generally better than the engineers in Department B. Engineers in Department A are more likely to leave if they perceive the company is getting worse at engineering - it becomes a self-fulfilling prophecy.

Then you get enormous pressure to come up with standardized testing - aka algorithms on the whiteboard, or some other academic inspired exercise - imposed by higher level leadership that wants to address a genuine problem (skill disparity across the org) but does not know any better way to do it.

I think, as PG points out, there may be a real opportunity to innovate here, and probably a big financial opportunity if anyone can figure out how to productize and scale a solution.

I struggle to see an easy answer, though. In a utopian universe (for a hiring manager) I'd do something like pay candidates to come on site and work for a week, then make a hire/no-hire decision based on that. But I think that is far too onerous for candidates (and a big company) to have legs.

manca(10000) 2 days ago [-]

This is exactly what I said in my retweet of PG's essay. Having bad, hackable tests in the tech industry itself proves the point: artificial tests are still the default way of thinking for many, and something we collectively need to unlearn. The question is how.

dunkelheit(4176) 1 day ago [-]

There are alternatives but they are not necessarily better. E.g. academia seems to rely on references and publications for hiring. This seems closer to your wish of caring about 'how good you have been'. But consider the downsides - it is inherently more insular (to work for a famous professor you need a reference from someone, preferably also a famous professor) and also your thesis advisor can easily ruin your whole career.

Contrast that with our industry, where a bright kid out of nowhere can study for the (imperfect, hackable, dehumanizing - I agree!) interview and have a realistic shot at that job at FAANG. I hope it still works this way, although of course it helps to be a Stanford graduate!

29athrowaway(10000) 2 days ago [-]

There was a time when computers weren't as fast and libraries were not as high-level/user-friendly, and you needed to know these things in order to get things done.

Today, unless you are doing things at scale (tiny fraction of startups), you don't need to know how to make things run in the most optimized way possible.

suyash(3616) 2 days ago [-]

Software engineering interviews are the worst offenders of this "test for grades" approach to hiring candidates. On the other hand, programmers are spending huge amounts of time, money, and energy hacking these coding interviews just to get a job. The situation is horrible and needs immediate disruption.

paul7986(4194) 2 days ago [-]

I loathe all types of coding or design challenges when interviewing! I once designed and coded a five-page website; it took a while, yet I never heard back from the company. So I'm going to seek out opportunities that don't force me to waste my time, especially on design challenges, since design is subjective. If you liked my portfolio well enough to consider me, then let's chat and see if we jibe and whether I'm a good fit for you, your team, and your company!

So far I've been very fortunate that recruiters reach out on LinkedIn and I rarely have to do such time wasting activities as they vouch for me.

Recently I dealt with a company that pays at parity (everyone makes the same; no one, woman or man, can negotiate their worth), and had five to six interviews and two to three coding/design challenges. Wow, I guess they are looking only for the subset of talent who would do all that. People might do that for a FAANG company or when jobs in the field are scarce, but this was no FAANG company, and thankfully there is still a good amount of demand!

dominotw(2050) 2 days ago [-]

Haha, here is the email I have from my upcoming Facebook interview:

'Your initial Facebook interview is coming up and we want you to ace it!

Here are some tips and resources so you know what to expect and can prepare adequately. Preparing is key.

Our initial interview determines whether to continue with a full series of onsite interviews. This initial interview is primarily a coding interview that will take place between you and a Facebook engineer.

It's important for any engineer to brush up on their interview skills, coding skills and algorithms. Practice coding on a whiteboard or with pen and paper, and time yourself. Preparing increases your odds significantly! Below are two helpful links to a Facebook Interview Prep Course led by Gayle Laakmann McDowell, author of "Cracking the Coding Interview". Use password FB_IPS to access the videos.

- Cracking the Facebook Coding Interview - The Approach
- Cracking the Facebook Coding Interview - Problem Walk-Through

This article provides much more advice about how to prepare: Preparing for your Software Engineering Interview at Facebook

Here's a sample problem to get started.

Sample Problem

Write a function to return if two words are exactly 'one edit' away, where an edit is:

- Inserting one character anywhere in the word (including at the beginning and end)
- Removing one character
- Replacing exactly one character

Most importantly, you can view this and other Facebook sample interview questions and solutions here.

Want more sample questions? Try HackerRank, LeetCode, and CodeLab. Be sure to practice questions in a variety of subjects and difficulty levels.'
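For what it's worth, the sample problem quoted above has a short two-pointer solution. A sketch in Python (my own, not Facebook's official answer; it treats identical words as zero edits, hence False):

```python
def one_edit_away(a, b):
    """Return True iff exactly one insert, delete, or replace turns a into b."""
    if abs(len(a) - len(b)) > 1:
        return False  # two or more inserts/deletes needed
    # Ensure a is the shorter (or equal-length) word.
    if len(a) > len(b):
        a, b = b, a
    i = j = 0
    edits = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            i += 1
            j += 1
            continue
        edits += 1
        if edits > 1:
            return False
        if len(a) == len(b):
            i += 1  # replacement: advance both pointers
        j += 1      # insertion: skip the extra character of b
    # Account for a trailing extra character in the longer word.
    edits += (len(b) - j) + (len(a) - i)
    return edits == 1
```

The walk compares the words in lockstep and tolerates exactly one mismatch, which runs in O(n) time and O(1) space.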

fhennig(10000) 2 days ago [-]

This is a very interesting thing to think about, I thought about it a bunch of times already, and have a couple of thoughts about it too.

First of all, once you're 55, it's easy to say stuff like that, because you won't be tested anymore. And I think there is a good chunk of survivorship bias: 'Hey, I made it without worrying about tests, so you can, too!' That probably isn't true for a big chunk of the population, especially once we look at non-CS people.

Then I think, he is fundamentally right. Grades shouldn't matter so much, it is about what you learn. And I like to approach things that way too, but in the end, I always have to study for the tests as well.

The problem is exactly how deeply it is ingrained in everything. If I just stop caring, it doesn't really help; I will just end up in a worse position, since everyone around me still cares about test results. The people who need to change their minds are the people who use tests as measures of qualities they are not a good measure of. I think a lot of people are falling through the cracks because they don't fit the expectations of an HR person closely enough.

In the end, we will always have to have tests. A completely individualized assessment of people's qualities or fit for certain roles just doesn't scale. It would be great, though, to come up with new ways of testing that are more in line with actual learning. For example, it would be nice to have tests at university where I can google things, just like in the real world.

I am lucky enough to be able to work in the booming tech sector where there are so many jobs that it is fine if I don't work towards a test.

volume(10000) 2 days ago [-]

> The problem is exactly how deeply it is ingrained in everything

You can use 'test' as a proxy for the impression you want to make on your boss and peers. Maybe even friends and family.

It's up to you to redefine that test (or create a complementary one) based on your own goals and your own timeline.

tonyedgecombe(3991) 2 days ago [-]

>'Hey, I made it without worrying about tests, so you can, too!'

To be fair he did say 'For me, as for most students, the measurement of what I was learning completely dominated actual learning in college. I was fairly earnest; I was genuinely interested in most of the classes I took, and I worked hard. And yet I worked by far the hardest when I was studying for a test.'

>I think a lot of people are falling thought the cracks because they don't fit the expectation of a HR person close enough.

In retrospect I'm sure this was the case for me. It's probably why I ended up starting my own business.

vntx(10000) 2 days ago [-]

> For example, it would be nice to have tests at university where I can google things, just like in the real world.

One of the best professors I had at university for a Linux course always said: "If you don't know something, google it!"

He backed up his words by allowing googling on the actual coding part of his exams. You had to understand the underlying concepts but you never had to memorize syntax. One time, he had us write a small program in C that involved threading during a proctored timed exam. Everyone ran out of time for that part.

His course and exams were _rigorous_. You could never cram for his class and expect to pass and indeed I know of one student repeatedly failing his course.

This was one of the courses in university where I gained the bulk of my useful knowledge and practical skills I use to this day. I can count the courses that were of similar quality I took at university on one(1) hand. The rest of my degree involved hackable tests, apathetic professors and were massive time-wasters. I learned nothing from them, obviously.

The good professors made me realize how bad the rest of my degree was and how much of a racket higher education could be.

I look back at my undergrad days and wonder how much more I would have learned if I had not focused so much on grades. I'm still actively trying to unschool myself.

AlchemistCamp(3981) 2 days ago [-]

This is great seeing so many PG essays! I've really missed his writing.

> And at elite universities, that means nearly everyone, since someone who didn't care about getting good grades probably wouldn't be there in the first place. The result is that students compete to maximize the difference between learning and getting good grades.

> When I started advising startup founders at Y Combinator, especially young ones, I was puzzled by the way they always seemed to make things overcomplicated.

I suspect part of the problem (of early founders being focused on hacking tests) was related to how heavily YC funded people from elite universities—which select people who do exactly that.

anonytrary(4070) 2 days ago [-]

> This is great seeing so many PG essays! I've really missed his writing.

I love reading his tweets! They're all so down to earth and illuminating. That said, I'm not as much a fan of his blog writing. I find it full of overly sparse nuggets of wisdom; it needs to be more concise and to the point, with fewer clever metaphors. This article could've been compressed by 70% and still been readable.

Hendrikto(10000) 2 days ago [-]

> In theory you shouldn't have to prepare for a test in a class any more than you have to prepare for a blood test.

The key here is the "in theory" part. It is very hard to design exams that actually test your understanding of some topic as opposed to your memorization skills.

As somebody who recently got his bachelor's CS degree (currently working on my master's), and who also puts more emphasis on actually learning rather than getting good grades, I can tell you that understanding a subject might be enough to pass, but is seldom enough to get good grades.

asdfasgasdgasdg(10000) 2 days ago [-]

This is especially true in a domain that has no practical purpose. In computer science, the 'exam' could be to go solve some hard problem, or write a program with a particular effect. In engineering, you can have students build a bridge and then stress test it. In athletics, the exam is competition. In business, it's making money. In dating, it's finding a mate (or whatever your personal goals are in that space). In farming, it's making food come out of the ground. For writers, it's a compelling or profitable story or book. For painters, it's a painting. Woodworkers can build a chair.

All those domains are testable. But what is the practical work product of a deep knowledge of medieval history? You can't test predictions about the past, since it already happened. Nobody needs knowledge of medieval history for any practical purpose in the modern day. There is no way to test for this knowledge in a practical scenario, because there is no practical outlet for the knowledge. Literary criticism is the same, along with much that is called 'liberal arts' today. So for subjects like this, exams and essays are the only possible work product, and at that point you're going to have to use contrived tests, since real tests don't exist.

souterrain(3846) 2 days ago [-]

The free market itself is the ultimate hackable test, no? Our society has examples where superb work goes unrewarded financially.

Does pg argue that they simply should ignore this test grade?

erikerikson(10000) 2 days ago [-]

He doesn't offer help there but...

Yes and the job interview.

brador(3978) 2 days ago [-]

Good grades = passing the well-paying-jobs resume filter = startup initial capital, for the 90% of the population without access to the bank of mom and dad or a large inheritance.

PG starting to lose touch with regular joe.

0x445442(10000) 2 days ago [-]

Yeah, I had a similar take. A lot of what he writes seems to come back to keeping the YC pipeline full.

Half way through the essay I was thinking, surely this can't be the first time PG has realized the idea of 'playing the game', which is what I've known it as all my life. But then at the end of the essay he uses that exact same phrase to hit home the point that 'playing the game' is antiquated now if one desires to become rich.

While I'd agree there are more opportunities for individuals or small groups to get rich now than in 1960, for the vast majority of us, the best bet to become rich is to 'play the game'.

whack(1158) 2 days ago [-]

> How does one get lots of users? They had all kinds of ideas about that. They needed to do a big launch that would get them 'exposure.' They needed influential people to talk about them. They even knew they needed to launch on a tuesday, because that's when one gets the most attention.

> No, I would explain, that is not how to get lots of users. The way you get lots of users is to make the product really great. Then people will not only use it but recommend it to their friends, so your growth will be exponential once you get it started.

Ironically enough, if there is one thing I've heard from HN, it is to not believe in the 'if you build it, they will come' myth. That no matter how good a product you build as an engineer, it is all pointless if you aren't willing to hit the ground and start aggressively marketing and selling it. That a mediocre product with great marketing/sales, will win over a better product with poor marketing/sales.

I've heard this directly from YC partners themselves, when they keep telling early-stage founders to market themselves via 'do things that don't scale'. At Reddit, one of YC's early successes, the founders were literally spending their time spamming their own site with fake posts and comments using sock-puppets, in order to create the illusion of activity. As someone who loves building products, I hate the idea of spending time and energy on tactics like that. I'd love to spend all that time and energy on making my product better. But I've learnt grudgingly from YC that product development is pointless unless I'm using guerilla tactics to hawk my product in front of users.

Maybe I'm in the minority, but I've actually had the opposite experience as what PG describes in his essay. Back in school and University, I did study very hard to ace tests. But the way I studied was to genuinely learn and understand the material as well as possible. Not to 'hack' it in some way. Whereas once I graduated, and especially once I tried out entrepreneurship, I realized that just building a great product was insufficient. I now had to 'hustle' and use 'street smarts' and 'growth hacking' in order to get people to notice what I'm doing. The lesson I had to unlearn from school was that the quality of your work will speak for itself, and will win the day. I never needed marketing and sales in school, but it seems indispensable in the real world.

BlueTemplar(10000) 1 day ago [-]

> That no matter how good a product you build as an engineer, it is all pointless if you aren't willing to hit the ground and start aggressively marketing and selling it.

You have pretty much summarized another one of his talks: http://paulgraham.com/ds.html

(Or even two: http://paulgraham.com/schlep.html)

smiley1437(10000) 2 days ago [-]

I love Paul Graham's articles, but isn't this Goodhart's Law but with more words? (When a measure becomes a target, it ceases to be a good measure)

acidburnNSA(3573) 2 days ago [-]

Searched the page for Goodhart and found this. There's added value here as this is a discussion of why Goodhart's Law is bad and how it's so prevalent in our education and business systems. I knew about the law but still got a lot out of this essay.

bkohlmann(1505) 2 days ago [-]

An important corollary is that school and tests teach you not to be wrong. They teach you that incorrect answers will be punished.

Yet, most interesting questions in life don't yet have defined answers. Thus you need to have and test a hypothesis, which will very often be "incorrect" the first time around. But that doesn't actually matter - the mere act of thinking about and defining what an answer could be sets you up to iterate and test. Eventually, you may find product market fit (a - not the - right answer).

You have to be willing to be wrong at first to learn...and win.

robocat(4203) 2 days ago [-]

Even worse, most tests have a well defined question and a correct answer.

In real life you don't know the question, there isn't a single canonical answer, and there is no score-keeper to tell you if you are passing.

qwerty456127(4199) 1 day ago [-]

Good point, but there have been countless amazing products that failed to raise money and/or to attract enough users. Many even succeeded only to be bought and terminated. Doing the actual job is not enough; in the real world you are doomed to hack users, investors, laws, and many other things anyway.

BlueTemplar(10000) about 24 hours ago [-]

Did he ever pretend otherwise?

Terretta(1702) 1 day ago [-]

> No, I would explain, that is not how to get lots of users. The way you get lots of users is to make the product really great. Then people will not only use it but recommend it to their friends, so your growth will be exponential once you get it started.

> At this point I've told the founders something you'd think would be completely obvious: that they should make a good company by making a good product. And yet their reaction would be something like the reaction many physicists must have had when they first heard about the theory of relativity: a mixture of astonishment at its apparent genius, combined with a suspicion that anything so weird couldn't possibly be right.

The essay context is students and startup founders, but it turns out most multi-billion dollar enterprises have forgotten this as well.

This notion does not come embedded in the heads of most "senior management" from either world.

BlueTemplar(10000) 1 day ago [-]

The point is that big companies have other, easier roads to success.

gnicholas(1393) 1 day ago [-]

I usually find PG's writing to be very illuminating, but I largely disagree with this piece.

I think the reason founders want introductions to influential people is not because they think that this is what you need to be successful. It's because they have seen many cases where influential people talked up startups that seemed to be pretty mediocre, and it helped them succeed. And they think "well my startup is at least as good as those other ones, so publicity by famous people will accelerate our path to success."

They've seen press coverage of lousy startups propel them into fundraising successes and growing revenues, even if they never became profitable. And they think "I'd succeed faster if my legit startup had that kind of exposure."

It's not that they think these things are a substitute for having a good product. They think (correctly) that having these things will accelerate their growth and somewhat lower the bar to success (particularly if there are network effects involved).

ajju(3125) 1 day ago [-]

> They think (correctly) that having these things will accelerate their growth and somewhat lower the bar to success

Incorrectly, unless you define success as a local maximum. You usually can't influence your way to broad user adoption.

sinameraji(10000) 1 day ago [-]

Love it. I wanna put this in the context of innovation in education:

IMHO, a radical innovation in education is one that can address at least two of the following flaws and failures in the current education system, at any growable scale:

– Duration and cost
– Content
– Delivery

If I were writing to satisfy a VC, I could also add a fourth bullet point called 'Supply/Demand', but since I'm not writing for a VC, I'll write what actually matters. In the case of education, it's okay to ignore the existing supply and demand and focus on the demand that should exist but doesn't yet.

Wrote a memo on these and elaborated more https://www.linkedin.com/pulse/q4-2019-what-i-see-current-st...

P.S.: I think I got lucky that my mom was a teacher, so I grew up having a definition of right and wrong different from what was expected of me, and that stayed with me throughout my education. I didn't do what was asked, but what I believed was right for my growth.

BlueTemplar(10000) 1 day ago [-]

Considering that 'education' is being used for daycare and conformity first, it might not be 'fixable' when these goals are still requirements...

Wowfunhappy(10000) 2 days ago [-]

> If you merely read good books on medieval history, most of the stuff you learned wouldn't be on the test. It's not good books you want to read, but the lecture notes and assigned reading in this class.

It is the professor's job to decide which parts of medieval history are the most important to learn. Therefore, those are what I need to study.

Maybe not all professors do this effectively—but as long as we're talking about ideals, this is how it should work.

bonoboTP(10000) 2 days ago [-]

Right, I had a similar reaction.

Autodidacts often end up reading a bunch of stuff from here and there with sub-par results. One of the main benefits of a (university) course is focus and guidance. That the prof or teacher selects specific topics they deem important for that stage of education. As a student it can be difficult to judge that yourself.

thepete2(3724) 2 days ago [-]

> And even most of that you can ignore, because you only have to worry about the sort of thing that could turn up as a test question.

watwut(10000) 2 days ago [-]

Yeah, this came across to me as an overly simplistic, not-really-thought-out argument too. I don't think it's some kind of massive issue that a college medieval history course isn't 'pick any part of it, and anything you learn about medieval history is good to go.'

It does not mean that you did not learn, or that the grade does not measure learning. It just means the topic is not the whole of medieval history (which would be ridiculously broad) but only selected parts of it.

It is kind of like complaining that Algorithms 101 doesn't allow students to pick whichever algorithms they like, and instead requires everyone to learn the same set of algorithms.

anonytrary(4070) 2 days ago [-]

> No one was pulling all-nighters two weeks into the semester.

As a physics student, I can say we definitely pulled 2am-ers 2 weeks into the semester. Our Condensed Matter professor gave us some of the most random questions that no one could anticipate. To be honest, I was one of those 'diligent' students who got good grades. I'll tell you right now his quizzes were not hackable. The following held true:

  |quiz subject matter| >> |lecture matter|
He tested on the former, so we had no way of memorizing or hacking the quiz. You had to really understand the concepts to do well on his quizzes. Ditto for my grad Quantum Mechanics class.

On the other hand, I've also had professors who give exams pulled from the internet and allow use of the internet while taking those exams. I hacked that immediately during the exam, then almost got in trouble for it.

> ...if the professor tells you that there were three underlying causes of the Schism of 13... you'd better know them.

A professor like my Condensed Matter Physics professor would go on and on about 3 fundamental causes and then quiz students about the 5 fundamental causes, giving a zero to anyone who couldn't think of all the base 3, then grading the remaining students who thought creatively of 2 additional causes against each other on a bell curve. :)

majos(4217) 2 days ago [-]

> A professor like my Condensed Matter Physics professor would go on and on about 3 fundamental causes and then quiz students about the 5 fundamental causes, giving a zero to anyone who couldn't think of all the base 3, then grading the remaining students who thought creatively of 2 additional causes against each other on a bell curve. :)

To me, this sounds like a plausible argument against such tests.

The issue of creatively grading responses to an ill-defined question often pops up in discussion here about interview practices. In those discussions, typically someone will say "I don't do a generic whiteboard interview. Instead I do [idiosyncratic thing x]. It really gives me amazing insight into the candidate."

Then someone else says "Yeah right, a weird test with unclear metrics just gives you a big empty space to fill in with all your biases and pick someone who answers the way you would".

Of course, both sides are exaggerated here. But it's not clear to me that "creative" tests are necessarily any better.

rsp1984(3745) 2 days ago [-]

I completely agree with the theory; however, what 90% or more of us do when we finish university is go work for an established (i.e. large) company, where winning means hacking bad tests (promotion cycles).

So in some perverse sense school does prepare most of us well for working life, as most of the time doing well at a BigCo isn't about actually performing well or actually making a difference; it's about leaving the impression of it at the right time with the people who matter for your next promotion.

throwawaytemp1(10000) 1 day ago [-]

This isn't fair to tests. They are actually far more fair and objective than BigCo promo processes. You can't fake your way through a variety of hard math questions the way you can move up the ladder due to things like nepotism.

mattbee(4129) 2 days ago [-]

This spends too many words justifying the stale 'real hacker' tradition of dunking on education, and kind of ignores the problem of how to measure something as subjective as learning.

So people who are good at tests get ahead. We get that you're not interested in grades. We all know people who play the game and win, not sincerely engage with their job or education or whatever.

But I'd like to hear from pg and other people at YC how they think they are 'hacked'. What do their successful applicants actually optimise for, when 'delight' and even 'growth' are just as subjective as 'learning'? After all, they fund lots and lots of companies and founders, many of whom make no returns. How do founders keep their funding 'success' long past the point it was deserved?

dkyc(3159) 2 days ago [-]

'Growth' in the YC definition ('revenue growth with positive unit economics', or the closest proxy to that) seems a pretty objective test to me.

Jgrubb(3439) 2 days ago [-]

There's a passage in Zen and the Art of Motorcycle Maintenance about exactly this.


Phaedrus' argument for the abolition of the degree and grading system produced a nonplussed or negative reaction in all but a few students at first, since it seemed, on first judgment, to destroy the whole University system. One student laid it wide open when she said with complete candor, 'Of course you can't eliminate the degree and grading system. After all, that's what we're here for.'

She spoke the complete truth. The idea that the majority of students attend a university for an education independent of the degree and grades is a little hypocrisy everyone is happier not to expose. Occasionally some students do arrive for an education, but the rote and mechanical nature of the institution soon converts them to a less idealistic attitude.

The demonstrator was an argument that elimination of grades and degrees would destroy this hypocrisy. Rather than deal with generalities it dealt with the specific career of an imaginary student who more or less typified what was found in the classroom, a student completely conditioned to work for a grade rather than the knowledge the grade was supposed to represent.

Such a student, the demonstrator hypothesized, would go to his first class, get his assignment and probably do it out of habit. He might go to his second and third as well. But eventually the novelty of the course would wear off and, because his academic life was not his only life, the pressure of other obligations or desires would create circumstances in which he just would not be able to get an assignment in.

Since there was no degree or grading system he would incur no penalty for this. Subsequent lectures which presumed he'd completed the assignment might be a little more difficult to understand, however, and this difficulty, in turn, might weaken his interest to a point where the next assignment, which he would find quite hard, would also be dropped. Again no penalty.

In time his weaker and weaker understanding of what the lectures were about would make it more and more difficult for him to pay attention in class. Eventually he would see that he wasn't learning much; and facing the continual pressure of outside obligations, he would stop studying, feel guilty about this and stop attending class. Again, no penalty would be attached.

But what had happened? The student, with no hard feelings on anybody's part, would have flunked himself out. Good! This is what should have happened. He wasn't there for a real education in the first place and he had no real business there at all. A large amount of money and effort had been saved and there would be no stigma of failure and ruin to haunt him the rest of his life. No bridges had been burned.

The student's biggest problem was a slave mentality which had been built into him by years of carrot-and-whip grading, a mule mentality which said, 'If you won't whip me, I won't work.' He didn't get whipped. He didn't work. And the cart of civilization, which he supposedly was being trained to pull, was just going to have to creak along a little slower without him.

This is a tragedy, however, only if you presume that the cart of civilization, 'the system,' is pulled by mules. This is a common, vocational, 'location' point of view, but it's not the Church attitude.

The Church attitude is that civilization, or 'the system' or 'society' or whatever you want to call it, is best served not by mules but by free men. The purpose of abolishing grades and degrees is not to punish mules or get rid of them but to provide an environment in which that mule can turn into a free man.

The hypothetical student, still a mule, would drift around for a while. He would get another kind of education quite as valuable as the one he'd abandoned, in what used to be called the 'school of hard knocks.' Instead of wasting money and time as a high-status mule, he would now have to get a job as a low-status mule, maybe as a mechanic. Actually his real status would go up. He would be making a contribution for a change. Maybe that's what he would do for the rest of his life. Maybe he'd found his level. But don't count on it.

In time - six months; five years, perhaps - a change could easily begin to take place. He would become less and less satisfied with a kind of dumb, day-to-day shop-work. His creative intelligence, stifled by too much theory and too many grades in college, would now become reawakened by the boredom of the shop. Thousands of hours of frustrating mechanical problems would have made him more interested in machine design. He would like to design machinery himself. He'd think he could do a better job. He would try modifying a few engines, meet with success, look for more success, but feel blocked because he didn't have the theoretical information. He would discover that when before he felt stupid because of his lack of interest in theoretical information, he'd now find a brand of theoretical information which he'd have a lot of respect for, namely, mechanical engineering.

So he would come back to our degreeless and gradeless school, but with a difference. He'd no longer be a grade-motivated person. He'd be a knowledge motivated person. He would need no external pushing to learn. His push would come from inside. He'd be a free man. He wouldn't need a lot of discipline to shape him up. In fact, if the instructors assigned him were slacking on the job he would be likely to shape them up by asking rude questions. He'd be there to learn something, would be paying to learn something and they'd better come up with it.

Motivation of this sort, once it catches hold, is a ferocious force, and in the gradeless, degreeless institution where our student would find himself, he wouldn't stop with rote engineering information. Physics and mathematics would come within his sphere of interest because he'd see he needed them. Metallurgy and electrical engineering would come up for attention. And, in the process of intellectual maturing that these abstract studies gave him, he would be likely to branch out into other theoretical areas that weren't directly related to machines but had become part of a larger goal. This larger goal wouldn't be the imitation of an education in Universities today, glossed over and concealed by grades and degrees that gave the appearance of something happening when, in fact, almost nothing is going on. It would be the real thing.

keithyjohnson(10000) 2 days ago [-]

Is it ironic that I'm going to quote this passage in my grad school admissions essay?

k__(3307) 2 days ago [-]

I never cared about grades, but I was never very good or very bad. Just slightly above average.

In academia, having good grades is sometimes helpful for climbing the ladder. Not all people can study medicine. Not all people can get a master's degree.

Often these gigs are given to the best students only, but sometimes you can circumvent the grading issue by waiting longer, going to the military, being friends with a prof, or working for free.

robocat(4203) 1 day ago [-]

> Not all people can study medicine

I think you are falling for the fallacy that the essay is trying to expose?

Surely those who pass medicine are those who are good at medicine exams. Plenty of people who would make great doctors don't, due to the exam system, not due to a lack of capability.

njacobs5074(4221) 2 days ago [-]

That is a great essay. Thank you for sharing.

I've worked in tech for quite a while now. I used to buy into the meritocracy cant but as the years wore on, I realized how much bullshit it was.

I think the biggest moment came when I started to think about how to explain to people in my new home, South Africa, that working in tech would free you economically. There's some truth to that. But that's not the whole truth.

The truth is that all those years ago when I got my first coding job in financial services, I ticked 3 important boxes for the interview: white, male, and university-educated.

i_am_new_here(10000) 2 days ago [-]

You are derailing the discussion and I hope you get downvoted into oblivion

Besides that: The (classical) financial industry (banks and insurers) has relatively trivial IT (copying data around and displaying it). Combined with external pressure, they are, today, leading on progressive indicators such as diversity, inclusion, near-shoring and far-shoring.

shrubble(10000) 2 days ago [-]

This makes me wonder if pg has ever read any of the writings of John Taylor Gatto?

Various quotes: https://www.goodreads.com/author/quotes/41319.John_Taylor_Ga...

avindroth(3990) 2 days ago [-]

I feel like he definitely has; he seems to read a lot, and Gatto is not unpopular among people with thoughts on school.

esotericn(4222) 2 days ago [-]

Not sure I agree with this.

The real world is very much about optimizing for and beating tests.

It might be producing a specific CV or preparing for an interview whiteboard session to land a job.

It might be socializing and networking in a specific way in order to land funding.

The real world does not commonly reward just being really good at arbitrary things. It's almost always focused on meeting a need that someone else has defined, much like a test.

xorcist(10000) 2 days ago [-]

That is only true if your life ambition is securing a job with that successful tech company.

If your ambition is being successful on your own, as an academic, an entrepreneur or as an artist, there are no standardized tests to beat.

blondin(10000) 2 days ago [-]

> The real world is very much about optimizing for and beating tests.

thank you.

it's awesome for someone like PG to bring the subject upfront. but what you are saying resonates with me.

genuine interest is missing in our field, and to some extent many others. 'beating tests', as PG puts it, is at an all-time high. i am at a point where i don't know who or what is right.

the numbers are telling an important story! those who learned to hack the tests or the system are popular online or offline. and they are 'succeeding' in life. they seem to have outnumbered those who put genuine interest first.

for instance, like many of my peers, i wanna learn machine learning and AI. but it is hard for me to find materials that resonate with me. materials that teach from first principles like those that got me hooked back in the day. they are lacking because we are so good at shortcuts and hacking. maybe we don't know how to do it anymore?

but who am i to blame anyone? there are so many, many, ways to hack and beat the tests and the system. and seemingly you can get ahead of many others in life by doing so. #AceYourFirstInterviewAfterBootcamp #TensorflowPyTorch #BeSureToLikeBelow #ThanksToMyPatreons #SubForChatAndEmojis #InstagramFacebookDown

i am very thankful for PG for starting the discussion. and for this comment.

chadcmulligan(3994) 2 days ago [-]

> The real world

It's not the real world - its the human world. The real world is very different, and getting more so imho. Skills to survive the human world are usually useless in the real physical world. It's a distinction I read recently from a philosopher that I lost track of, maybe someone here knows. Her point was (I think) that when someone says the real world, with relation to education, it's not really - it's the world of human systems that are very much open to degradation and corruption, and they can be changed.

matwood(10000) 2 days ago [-]

Agreed. Life is a series of tests. The big difference between life and school is that in life it is usually much harder to anticipate what is going to be tested. That is where the real skill lies.

When a startup prioritizes feature A over feature B, they are making a decision on what is going to be tested by customers. The startup then optimizes for passing the anticipated test.

buboard(3489) 2 days ago [-]

There is truth to this. It reflects the shift in mentality of tech entrepreneurs from the 2000s (experimentally entrepreneurial, introverted hackers, libertarian) to the 2010s (work for FANG or make a feature that FANG wants to buy, well-curated GitHub and social media, social justice).

I could dare say that this mentality flowed from academia to the greater economy. Academic funding has been generally formalistic and test-driven for decades.

aex(10000) 2 days ago [-]

I found out how to hack the tests quite early on. I had straight A's until I no longer cared about the grades. I've also realized that entering and pursuing a career at a tech corporation is a game as well.

What has been driving me to startups is the fact that creating a profitable business has no place for hacks.

Fundraising is a game that could be easily hacked though. I thought it was only possible to hack the fundraising game for a couple of rounds but Adam Neumann et al. changed my mind that hacks could carry a business even to post-IPO.

dunkelheit(4176) 1 day ago [-]

> What has been driving me to startups is the fact that creating a profitable business has no place for hacks.

I feel that this hack/not hack distinction comes from a confusion of goals. If your end goal really is creating a profitable business, then hack/not hack is not a useful distinction. Everything you do either gets you closer to your goal (then it is good, hack or not) or not (then it is bad, even if it 'feels right'). But suppose your real goal is 'doing the most good for society' or whatever, and you think that you can achieve this goal by building a profitable business. Then if you e.g. create some bullshit product and market the hell out of it (thus achieving your proxy goal), it will feel like a hack that doesn't get you closer to your real goal.

TLDR: hack/not hack distinction arises when there are two goals: a proxy and the real one. A hack is something that gets you closer to your proxy goal but not the real one.

geofft(3275) 2 days ago [-]

> What has been driving me to startups is the fact that creating a profitable business has no place for hacks.

Not sure that's true. Marketing is a hack, for one. If you have the best product but nobody knows about it, it's no use.

More generally, there's a reason that the tech career ladder is hackable: performance reviews are both important and also too expensive to do right. If you want a real evaluation of what I've done over the past year, what you really need is someone following me 40 hours a week, noting how I contributed (positively or negatively) at small meetings, keeping track of whether my projects are late because I helped someone with something truly important or I spent my time on HN, etc. But you can't assign one reviewer per employee, so you make an approximate process where employees self-report what they did and managers report the fraction they've seen. You also can't get rid of the process, because a simple profit motive demands you incentivize employees for actually delivering business value. So you have a process that's vulnerable to exploits like flashy launches that will wither in a year.

If you as a business owner aren't evaluating your employees, you won't be profitable. If you're watching each moment of your employees, you're wasting time. If you spend your time developing a fairer review process, you're not working on your actual business. If you don't hire employees and just put yourself out there in the market, your customers certainly aren't evaluating you fairly. (And of course your employees and potential hires are evaluating you on partial data, and it's in your interest to hire and retain good employees.) So whatever you do, it's hackable, and if you don't play the game you'll forfeit it.

ZeljkoS(402) 2 days ago [-]

Yes, most tests are bad, but there is one piece of advice that improves them immensely: never ask questions that are not at least Level 3 in Bloom's taxonomy of knowledge: https://blog.testdome.com/blooms-taxonomy/

I cofounded a technical screening startup, and despite our efforts to educate the customers who enter their questions on our platform, 95% of their custom questions are bad. They tend to ask trivia questions that can quickly be googled, instead of work-sample questions (which we at TestDome.com prefer). I think they learned that from years of paper tests in school; it is very hard to unlearn.

Just to give you an idea, for testing web programmers we suggest questions where you need to find bugs in HTML code (https://www.testdome.com/questions/html-css/inspector/17629), while our customers would ask questions like 'What does CSS stand for?'

Too(10000) 1 day ago [-]

Thanks for the article.

The advice against level 5 questions is interesting. I found them to be very useful for gauging whether a candidate has really applied themselves in the field before, especially when comparing two big fields against each other. For example, asking if they prefer SQL or NoSQL. Anyone who doesn't answer some variant of 'it depends' has probably not had enough hands-on experience with either of them. But regardless of whether the candidate has a strong opinion, and regardless of whether it's the same opinion as yours, the answer is going to give you an endless number of openers for dialogue and follow-up questions on how they reached that conclusion, which projects they used such techniques in, and how it worked out.

As the article notes, the questions can easily be misperceived as checking whether the candidate has the same opinions as you, or you can end up outsmarted by eloquence. Since you are yourself asking and evaluating the questions, you have to ensure this doesn't happen, by putting opinions aside and only asking about fields which you yourself deeply understand. Not sure if this type of question might be off-putting for a candidate, though, if they don't understand its deeper purpose? Should you give a disclaimer that you are not actually interested in their final opinion?

BlueTemplar(10000) 1 day ago [-]

'Why is Pluto no longer classified as a planet?' You've passed the 'categorization' level, but might have actually failed the real 'understanding' level, for which 'We have discovered many large(r) objects beyond it' might actually be a more correct answer!

After the discovery of dozens of new trans-Neptunian 'planets', categorizing Pluto as a 'real planet', while it probably had more in common with them, became problematic.

The best definition that we had to come up with to cleanly separate 'real' planets from the others was therefore quite unlikely to include Pluto!

appleflaxen(3623) 2 days ago [-]

can anybody comment on the difference between 'understanding' and 'applying' in that taxonomy? The examples provided don't seem very good to me.

is it the difference between understanding a language and not speaking it, perhaps? following a conversation, but not being able to participate in it?

buboard(3489) 2 days ago [-]

I can't find whose quote it is that 'the most difficult thing is to teach people to think simply'.

Unfortunately education cannot test that, because tests by definition test only a fragment of a system. Knowing and understanding one thing end-to-end is meaningful knowledge, and qualitatively better than knowing fragments of 100 things. Perhaps you can assess the former by grading a diploma thesis on a very specific subject. Tests may have an important role in motivating students to delve more deeply into subjects, but they don't go beyond that.

There's similarly more value to knowing some technology 'end-to-end' rather than specializing in only one level and never understanding what's below.

It's also interesting to read this:

> There are now ways to get rich by doing good work, and that's part of the reason people are so much more excited about getting rich than they used to be.

Which may reflect how things were 15 years ago. There were 2 threads in 'Ask HN' last week which basically concluded that these days you're much better off (money-wise) working in an uncreative role at FANG.

fhennig(10000) 2 days ago [-]

> Tests may have an important role in motivating students to delve more deeply in subjects but they don't go beyond that.

I feel like university doesn't allow time to stray from the beaten path and explore topics you're interested in. Most people I've talked to have a very stressful studying experience. You know people care about your grade in the end, and you pay a lot of money for your degree. So it makes sense to spend the majority of your time learning the course material inside and out.

jakobmi(10000) 2 days ago [-]

It's exactly the same with hiring at FANG/McKinsey/BCG/... :). You optimize for the (well known) interview process. You study 10 hours/day for 2 weeks. And you get the job with 99% probability.

avindroth(3990) 2 days ago [-]

The artificiality (?) is what gets to me tbh

jpm_sd(3029) 2 days ago [-]

> At this point I've told the founders something you'd think would be completely obvious: that they should make a good company by making a good product.

Sure, ok. How do you make a good product? That definitely is not 'completely obvious'.

BlueTemplar(10000) 1 day ago [-]

At least one user is going to love it.

ken(3749) 2 days ago [-]

I'm glad he made the leap from school tests to funding tests, as it seemed to be a thinly veiled analogy from the start. I'm disappointed, though, that he didn't take the final step and admit that money itself is also a poor, hackable test.

Funding, growth, usage -- these are all still one level removed from something worthwhile or beneficial. Cigarettes have amazing usage numbers even in 2019.

I'm waiting for the entrepreneur who will say he doesn't care how much money he makes or how many customers he has, only that it's worthwhile and needed doing.

zamfi(4169) 2 days ago [-]

> I'm waiting for the entrepreneur who will say he doesn't care how much money he makes or how many customers he has, only that it's worthwhile and needed doing.

Closest to this today might be Elon Musk, maybe...but there are other complexities there?

jtbayly(3749) 2 days ago [-]

Every free (as in speech) program is written by somebody that said that. But they are generally not entrepreneurs.

The entrepreneur is by definition somebody that cares about money.

cousin_it(3339) 1 day ago [-]

The real final step is admitting that the 'worthwhile and needs doing' judgment in your head is also a test, which isn't always more reliable than the tests you face in university or the test of money.

StavrosK(586) 2 days ago [-]

> I'm waiting for the entrepreneur who will say he doesn't care how much money he makes or how many customers he has, only that it's worthwhile and needed doing.

This is literally all of open source software. The problem here is that you're waiting for an entrepreneur to say this, which is the wrong kind of person to expect this from.

acidburnNSA(3573) 2 days ago [-]

> I'm waiting for the entrepreneur who will say he doesn't care how much money he makes or how many customers he has, only that it's worthwhile and needed doing.

I believe this is the role of the nonprofit sector.

patkai(4182) 2 days ago [-]

How about DHH? He is the closest to this I know.

hn_throwaway_99(4107) 2 days ago [-]

I think this is an excellent point. I really agree with pg's thesis, but I think it would be worthwhile for him to go more in depth into the 'edge cases' he talks about in passing, mainly because I think these edge cases actually bolster his thesis, not detract from it.

Companies like Theranos and WeWork succeeded for a short while because they tried to 'hack the test'. E.g. Theranos hacked the metrics of 'buzz', 'investment amount', 'notable people on your board', but they didn't actually have a working product. Adam Neumann certainly 'hacked the test' to the tune of a billion plus dollars, but WeWork certainly didn't succeed (I'd note there is still a critical difference between Theranos and WeWork in that customers actually really like the product WeWork provides, the question is just can they provide it profitably).

I think hacking the test can get you so far in the startup world, but eventually reality rears its head and what matters is whether you can provide a product people want, profitably.

One quick note on your cigarettes example. The problem here is that companies are giving people what they want, they've just figured out how to 'hack the test' so to speak with respect to what the human body wants. This is basically the case with all addictive substances - there is essentially a hackable part of the human 'desire' subsystem that people have taken advantage of.

robocat(4203) 2 days ago [-]

You are implying that PG thinks money is all important - but he doesn't think that.

PG: if you had to boil it down to one quality to look for, authenticity would be the most important one. "You're looking for people who are real friends," he said to Chang. "Not just for people who got together for the purposes of this startup. You don't want people who were in it just for the money."

He also talks about how startup founders who are more interested in the product or the challenge of building a business make buttloads of cash as a side effect.

And YC funds nonprofit startups too?

Also from another 2004 PG essay:

"""Money Is Not Wealth

If you want to create wealth, it will help to understand what it is. Wealth is not the same thing as money. [3] Wealth is as old as human history. Far older, in fact; ants have wealth. Money is a comparatively recent invention.

Wealth is the fundamental thing. Wealth is stuff we want: food, clothes, houses, cars, gadgets, travel to interesting places, and so on. You can have wealth without having money. If you had a magic machine that could on command make you a car or cook you dinner or do your laundry, or do anything else you wanted, you wouldn't need money. Whereas if you were in the middle of Antarctica, where there is nothing to buy, it wouldn't matter how much money you had.

Wealth is what you want, not money."""

petra(4060) 2 days ago [-]

I agree.

Also, most small businesses do stuff people want. But it's mostly commodities. So the hard part in many small businesses is marketing.

And marketing is mostly hacking.

As for startups, marketing is very important.

But is it common to succeed without something people want, or at least get addicted to?

fghtr(3387) 1 day ago [-]

Perhaps, Purism (Social Purpose Corporation) could fit: https://www.hostingadvice.com/blog/purism-respects-user-righ...

texasbigdata(10000) 2 days ago [-]

Those metrics reveal something intrinsic though. Take cigarettes.

They're bad, not liked, etc etc. The usage in 2019 reflects something deeper: their effectiveness, perhaps a cohort effect and the difficulty of quitting, and a certain value proposition.

There's a term I'm perhaps misusing which is: revealed preferences.

This statement is perhaps a stretch, but maybe what paul meant is that learning history for history's sake is helpful, but learning potentially untrue test patterns (aka launching on a Tuesday to try to maximize funding) is not.

This might not get a lot of traction but it seems your last sentence describes the Ayn Rand novel where the guy spends years toiling in anonymity versus his peer who is 'fake' and successful, where the value is in the work itself.

daly(4187) 2 days ago [-]

I took a class from John White at UCONN. He gave us a couple dozen CS papers at the start of the year. Every class had 3 paper presentations, 20 minutes each. A random student was picked for each presentation.

So you had to read and understand each of the upcoming 3 papers because there was a random chance you would have to present one of the three.

The result is that you eventually read all of the papers well enough that you could give a 20 minute presentation.

There were no grades. It was pass/fail.

It was one of the best classes I ever took.

bonoboTP(10000) 2 days ago [-]

But the person who presented the very first paper won't have to care about the rest of the semester right? Or can you get randomly picked multiple times?

jimbob45(10000) 2 days ago [-]

I had a high school professor do a somewhat abridged version of that for our final. It was impossible to spend an adequate amount of time on all three so you really had to focus on two and hope you got lucky.

Easily the most stressful final I've ever taken.

daly(4187) 2 days ago [-]

Are you in EECS? Most big data centers use FPGAs 'on the wire' to do things like encryption on the fly or compression so that they get 'wire speedups' without burning CPU cycles.

So FPGA solutions that 'live on the wire' and can be 'plugged into the LAN port' are a really useful area you can hack in your dorm room.

I worked in the security area. One of the key problems is 'exfiltration', where someone tries to copy valuable files off a server. I created a startup to attack this 'on the wire' with hardware so it can't be hacked. (It failed because nobody knew what an FPGA was, nor how TCP packets worked. Sigh.)

The idea is to create crypto-hashes of the valuable files. Hand-install the hashes into an FPGA using a read-only micro card. The FPGA sits on the wire, hashing files being sent. If the hash matches, scream.
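The scheme described above (a read-only set of hashes of valuable files, hash everything going over the wire, alarm on a match) could be sketched in plain software like this. All the names here are hypothetical, and a real deployment would run the hashing in FPGA logic at wire speed rather than in Python:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 fingerprint of a file's contents."""
    return hashlib.sha256(data).hexdigest()

class ExfiltrationMonitor:
    """Software sketch of the on-the-wire idea: hold a fixed set of
    hashes of protected files and check each outgoing payload against it."""

    def __init__(self, protected_files):
        # In the hardware version these hashes are hand-installed
        # via a read-only card; here we just precompute them.
        self.known_hashes = frozenset(fingerprint(f) for f in protected_files)

    def check_outgoing(self, payload: bytes) -> bool:
        """Return True ('scream') if the payload matches a protected file."""
        return fingerprint(payload) in self.known_hashes

secret = b"TOP SECRET design doc"
monitor = ExfiltrationMonitor([secret])
print(monitor.check_outgoing(secret))             # True: exfiltration detected
print(monitor.check_outgoing(b"harmless traffic")) # False: let it pass
```

Note this only catches byte-exact copies of whole files; detecting partial or re-encoded leaks would need rolling or chunk-wise hashing, which is a harder problem.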

The same idea of 'unhackable' malware scanning can be done on an FPGA. Only someone with physical access to the FPGA card can modify it. It operates 'at wire speeds'.

The hard part is finding people who can spell FPGA and know what malware means. Be that person.

You could do this in your dorm room.

The side effects are that you learn a LOT very quickly, it is grounded in real code, and you could actually convince YC that your 'homework' is worth a passing grade.

Besides, FPGAs are just plain fun. Make your own hardware Neural Nets. Hack your bike with a camera and an NN to warn of approaching cars.

strictfp(3936) 2 days ago [-]

If you're one of those people, where would you look for a job?

rbinv(3947) 2 days ago [-]

Wrong thread?

zopine(10000) 2 days ago [-]

TLDR: The way to succeed in school is by gaming the system, but the way to succeed in business is by making a great product.

But PG is wrong. Startups can, and do, game the system. By showing unsustainable, artificial growth with no real value behind it, they can fool investors long enough to make a lucrative exit by IPO or acquisition.

buboard(3489) 2 days ago [-]

An IPO or acquisition without proving market value should count as a startup failure.

guptaneil(2607) 2 days ago [-]

The problem is not that fundraising feels like a test, it's that fundraising is a test posed by authority figures, and by pg's own definition, is probably hackable. How hackable is proportional to the quality of the VC.

Luckily startups themselves are indeed not a test. So you can avoid any test by making enough revenue to not need external funding, but the second best strategy is take the VC's test without trying to hack it as your own test of the investor. Assuming you're a good investment, you'll beat those trying to hack the test unless the investor's test is susceptible to hacking. If you're not a good investment (which is by definition not obvious in the early days), maybe you should just try to learn to hack the test? Ironically, one of YC's biggest value adds is that it helps you hack those tests by lending you their name.

ignoramous(3609) 2 days ago [-]

Paul, though he doesn't reference his previous essay [0] (which has a section on gaming the system) in the current one, did write then that fooling the investors and treating it like a test is only going to delay the inevitable.

[0] http://paulgraham.com/before.html

adwn(3270) 2 days ago [-]

> The way you get lots of users is to make the product really great.

That is wrong – or, at the very least, incomplete. 'Build it and they will come' is the dream and misconception of every programmer who's talented at software development but not at sales. Hell, this is the number one advice of every business how-to book ever: You can't rely on people finding out about your great product by chance, you need to put in the work and sell it!

> Then people will not only use it but recommend it to their friends [...]

That's not how it works for B2B software, and only rarely for B2C. Usually, if a product is successful, it's as a result of good marketing (a good product is somewhat necessary, but not sufficient).

TeMPOraL(2647) 2 days ago [-]

100% right, and the corollary of this is: you don't actually need to make a great product if your sales & marketing game is good enough.

Or, in other words, a lot of successful business - including startups - is exactly the 'test hacking' PG urges founders to unlearn.

I'm honestly surprised by this essay arguing that 'hacking the test' is the wrong approach in startups/business. 'Hacking the test' is essentially what your marketing is supposed to do. These 'non-authoritarian' tests like selling things are just as hackable as school tests; you just have to discover how the system really operates.

The whole authoritarian/non-authoritarian split doesn't carve reality at the joints, IMO. A football match is really an authoritarian test - it tests who wins under the game's rules, which are given from the top. It may be hard to hack, but that's because passing that test is usually synonymous with the terminal goals of the test takers - i.e. they want to win the game. You'll note, however, that once money gets involved, people are sometimes made to deliberately lose games; in these scenarios, there are usually other people involved who are absolutely hacking the test.

The best way I've found to carve reality at the joints is to talk about terminal and instrumental goals. Learning useful things isn't the terminal goal for most students; getting a good career is (and/or not pissing off parents by getting bad grades).

So I'm ultimately surprised that PG argues that building good things, not hacking the test, is how you win the startup game - I would think most startups have products as instrumental goals; the exit is the terminal goal. And building a great product isn't the best way to achieve that goal.

jstummbillig(10000) 2 days ago [-]

As an example, can you name a b2b or b2c product that you feel is truly great but overlooked?

jakobegger(3385) 2 days ago [-]

> 'Build it and they will come' is the dream and misconception of every programmer (...)

Well, the amount of marketing required strongly depends on the quality of and demand for your product.

I built a software product and launched it with very very minimal marketing (I emailed a blogger and published it on a mailing list).

People liked my app so much that I had 1,000 downloads within a few weeks (which I consider decent for a somewhat niche app).

Working on that app has since become my full time job, and I do very minimal marketing.

Maybe I could make more revenue if I focussed more on marketing, but to be honest I'm pretty amazed how well word of mouth advertising works.

Gatsky(3645) 2 days ago [-]

Yeah, it's a matter of emphasis. When you are starting up, not having a great product is your biggest problem by far. Also, your attitude, if widely applied, would result in a world filled with useless crap with brilliant marketing.

going_to_800(4117) 2 days ago [-]

If the product is great (not only the software, but support, vision, market fit, etc.), you need a minimum base of users before it takes off; it doesn't matter if it's B2C or B2B, people will talk.

ziadbc(3568) 2 days ago [-]

'Make people want something that is actually junk.' doesn't have quite the same ring to it, nor does it sound remotely more plausible.

Presumably the 10% weekly growth that YC and PG advocate as the default growth metric isn't expected to happen in the absence of sales but the contrary: sales plus anything ethical that works and isn't some Ponzi scheme.

He provided examples via the link to http://paulgraham.com/ds.html where he describes the Stripe founders engaging in zealous activities many would call sales, despite being the type of startup that could have leaned back and had ample demand.
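For a sense of scale, the 10% weekly figure mentioned above compounds hard; a quick arithmetic sketch (my own illustration, not from the comment) of what it implies over a year:

```javascript
// 10% weekly growth sustained for a year: multiply by 1.1 each week.
const weeklyRate = 1.1;
const weeks = 52;
const yearlyMultiple = Math.pow(weeklyRate, weeks);
console.log(yearlyMultiple.toFixed(0)); // prints "142" -- roughly 142x in a year
```

Which is why YC treats sustained 10% weekly growth as exceptional rather than a baseline most companies ever hit.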

Historical Discussions: How to fight back against Google AMP as a web user and a web developer (December 05, 2019: 1222 points)

(1230) How to fight back against Google AMP as a web user and a web developer

1230 points 4 days ago by markosaric in 10000th position

markosaric.com | Estimated reading time – 7 minutes | comments | anchor

There's a popular thread on Hacker News with lots of people complaining about how Google AMP (Accelerated Mobile Pages) is ruining their mobile web experience.

This week I also got two AMP links sent to me via Telegram, and seeing those Google URLs replacing unique domain names made me a bit sad on behalf of the owners of those sites. As a site owner myself, it feels like the sovereignty of a website is being taken away.

Other than people sharing links with me, I rarely encounter AMP in the wild. It is possible to restrict Google AMP from your life both as a web user and as a web developer. Here's how you can fight back against Google AMP.

  1. Don't use Google search
  2. Don't use the Chrome browser
  3. Don't use AMP on your own sites
  4. Why are so many sites slow in the first place?
  5. Treat the cause: Third-party requests slow down the web
  6. How to make your sites faster than AMP without using AMP

Don't use Google search

Other search engines such as Qwant and DuckDuckGo don't rank AMP sites. So switching from Google search to a more ethical choice removes most of the AMP touch points you might have.

It's simple to switch the default search engine in your browser of choice. You can do it directly in the browser settings. Anything other than Google will get you into no-AMP territory.

But now you might say that all those non-AMP websites you visit are full of advertising and distractions, and are slow to load? There's a solution for that too.

Don't use the Chrome browser

Firefox is a great browser alternative that is worth a try. Just visiting a site with Firefox's Enhanced Tracking Protection on makes a faster and less intrusive web. It's a built-in blocker of intrusive ads and invisible scripts.

Firefox also has a Reader Mode, so any site can be clutter-free even without AMP. And these features work on both Firefox for desktop and Firefox for mobile. Here's how the Reader View looks in Firefox Preview for Android:

Don't use AMP on your own sites

Publishers and other site owners feel forced to use AMP because they fear they'll lose Google visibility and traffic without it. These are forces some publishers cannot resist until more people stop using Google Chrome and Google search.

You as a site owner or developer are a different case. I like the idea of a faster and distraction-free web, but I don't like the idea of the web being controlled and molded by one company. Especially not one that is the largest advertising company in the world.

This is the Googled-web Google wants to see you develop. The web "delivered by Google". Your site being integrated with all the other cool Google products such as Analytics and AdSense.

I enjoy visiting sites created by real people. The AMP pages are more boring, less diverse, less competitive, less functional and have less personality.

Why are so many sites slow in the first place?

The main reason AMP exists is that sites are slow to load. But why are sites slow to load in the first place? They feature many unnecessary third-party elements that do nothing for the user experience other than slow it all down.

Google themselves will point the finger at their own analytics and ads if you use their webpage speed tests to measure the performance of your site. They even provide guides on how to make third-party resources less slow.

Analytics scripts, advertising scripts, social media scripts and so much more junk. It is normal to visit a site where the majority of the page is composed of unnecessary elements that you don't see. This is why the web is so much faster with Firefox's Enhanced Tracking Protection or with an adblocker.

Firefox blocks almost 30 different trackers on a single page of Wired. It also blocks the auto-play of video and audio. This is about 30% of the total page weight. It's important to note that Wired still gets to display their banners for people to subscribe to the magazine.

As a reader, you don't really see any difference at all in the article that you're reading. All this content that they try to load and that is blocked by Firefox is not useful to you.

Treat the cause: Third-party requests slow down the web

Here are some stats:

  • 94% of sites include at least one third-party resource
  • 76% issue a request to an analytics domain
  • 56% include at least one ad resource
  • Median page requests content from 9 unique third-party domains
  • Google owns 7 of the top 10 most popular third-party calls

And what are these calls that Google owns? They're things such as Google Analytics, Google Fonts and Google's DoubleClick advertising scripts.

Most popular third-party requests

So you can see why there must be some kind of internal struggle at Google. They understand the value of a faster web but they also cannot go after the main cause of the slow web. And this is how technology such as AMP gets invented and makes things worse.

We should be treating the cause of this slow web disease instead.

How to make your sites faster than AMP without using AMP

It's possible to make your site faster than an AMP site without using AMP. You need to make speed the priority when developing.

  • Restrict unnecessary elements. Understand every request your site is making and consider how useful each one is. Do those flashing and distracting calls-to-action actually make a difference to the goals you have, or are they simply annoying 99% of the people who visit your site? Do you really need auto-playing videos?
  • Restrict third-party connections and scripts. Do you actually need Google fonts? Do you need the official social media share buttons? Do you need to collect all that behavioral data that you may never look at? There are better and lighter solutions for each of these.
  • Lazy load images and videos. There's simply no reason to load your full page and everything on it as soon as a visitor enters your site. Lazy loading only loads images in the browser's view and the rest only as the visitor scrolls down the page.
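The lazy-loading step above can be done at build time with the browser's native `loading` attribute, so no client-side script is needed at all. A minimal sketch (`addLazyLoading` is a hypothetical helper, and a regex pass like this is only safe for simple, trusted markup):

```javascript
// Build-step sketch: add native lazy loading to every <img> in a page's
// HTML, so the browser itself defers fetching off-screen images.
function addLazyLoading(html) {
  // Skip tags that already declare a loading behaviour.
  return html.replace(/<img(?![^>]*\bloading=)([^>]*)>/gi, '<img loading="lazy"$1>');
}

const page = '<p>Intro</p><img src="/hero.jpg" alt="Hero">' +
             '<img loading="eager" src="/logo.png" alt="Logo">';
console.log(addLazyLoading(page));
// The first <img> gains loading="lazy"; the second is left untouched.
```

For real sites an HTML parser (or the CMS's templating layer) is a safer place to set the attribute than a regex over the final markup.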

By doing this, your original site will load faster than AMP sites. And the web experience will be better, more open and more diverse for everyone.

I also tweet about things that I care about, think about and work on. If you'd like to hear more, do follow me or add your email to my occasional newsletter.

All Comments: [-] | anchor

fouc(4053) 4 days ago [-]

Another way to fight back against Google in general is to stop using Chrome and stop using google search.

Also stop using auto-update. Promote a diversity of browsers and browser versions.

illnewsthat(10000) 4 days ago [-]

I don't think recommending that people stop using automatic updates is a good idea, because it likely means users will not receive security updates.

doublerabbit(10000) 4 days ago [-]

Also stop them from buying android devices.

Good luck with that.

ctingom(1456) 4 days ago [-]

Also, Bing is really good! I'm the only guy in the company that uses it, but seriously I love it!

lern_too_spel(4220) 4 days ago [-]

Bing also uses AMP. That might be why you love it.

deminature(4123) 4 days ago [-]

Probably a highly unpopular opinion, but as a user I've never had anything but positive experiences with AMP-enabled sites. They load massively faster than normal sites, especially on poor mobile connections where main sites sometimes hang indefinitely trying to load javascript, ads, etc.

While content publishers continue to overload their sites with more trackers, ads, javascript, and remotely loaded assets that slow down performance, AMP seems like one of the few counterbalances and is pro-user, even if Google's endgame is self-enrichment rather than benevolence.

Content publishers could easily fight back by independently improving their own performance and not forcing mobile users to suck down megabytes of trackers on shaky connections, but they seem to be choosing not to.

millstone(10000) 4 days ago [-]

Have you used AMP reddit?

kccqzy(3070) 4 days ago [-]

> They load massively faster than normal sites, especially on poor mobile connections where main sites sometimes hang indefinitely trying to load javascript, ads, etc.

Have you tried the normal mobile websites with an adblocker?

jimrandomh(4125) 4 days ago [-]

My experience with AMP, immediately before seeing this article:

1. On desktop, I clicked through a link on Facebook, leading to an AMP page

2. The page was clearly meant for mobile, and looked bad on desktop; the images were full-screen size, the font was too big, and the text line length went all the way to the edges of my very wide browser window.

3. I used Ctrl+Minus to adjust the zoom, which fixed the font size but not the images or the line length.

4. I looked at the top and bottom of the page for a 'desktop site' link, and couldn't find one.

5. I looked at the address bar, and saw that the URL was an AMP URL. This is the first time I have noticed that I am using AMP in more than a month.

6. I closed the tab and went to HN, where this was the top article.

When AMP works well, it's inconspicuous, so it's not so surprising that most of my remembered experiences with it are negative. Still, I think Google needs to invest a bit more in preventing this sort of bad experience, because currently it comes across as 'Google breaking the web'.

gregable(3914) 4 days ago [-]

Every valid AMP page includes a <link rel=canonical href='...'> pointing to the canonical URL for the document. If the aggregator (Facebook in this case) parsed this annotation and linked to the canonical URL, as the publisher recommends, you would get the version the publisher preferred. This is how browser extensions that rewrite to the non-AMP version work: they extract this URL.

The AMP viewer iframe share button (and share intents) all share this canonical URL, not the AMP URL. Google's implementation is trying its best to get you to that version as well when sharing links.

Link Rel Canonical is an old (2012) standard: https://tools.ietf.org/html/rfc6596
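A rough sketch of the extraction described above - pulling the canonical URL out of an AMP page's markup. `extractCanonical` is a hypothetical name, and real extensions query the parsed DOM rather than regex-matching raw HTML:

```javascript
// Find the publisher's preferred URL via the <link rel="canonical"> tag
// that every valid AMP page must carry.
function extractCanonical(html) {
  const tag = html.match(/<link[^>]*\brel=["']?canonical["']?[^>]*>/i);
  if (!tag) return null;
  const href = tag[0].match(/\bhref=["']([^"']+)["']/i);
  return href ? href[1] : null;
}

const ampPage =
  '<html amp><head>' +
  '<link rel="canonical" href="https://example.com/story">' +
  '</head></html>';
console.log(extractCanonical(ampPage)); // -> https://example.com/story
```

In a browser extension the equivalent one-liner would be `document.querySelector('link[rel="canonical"]').href`.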

rdiddly(10000) 4 days ago [-]

Great post, although I'm a little meh on lazy-loading images. I like when the whole page is finished loading the moment I think it's finished loading. But more to the point, lazy-loading can be a crutch just like AMP is, for solving problems that shouldn't exist, such as: your page is big & bloated. If you keep it small, there's no need for lazy-loading. But you have to limit the number of images, and optimize the ones that are there. And probably only one video per page. Horrors! Obviously none of this works when the page is effectively infinite in size - such as when you're trying to give the user the addictive excitement of scrolling through a continuous, visually-rich 'feed'.

account42(10000) 3 days ago [-]

Yeah, please don't add lazy loading.

- It breaks viewing the site with javascript disabled.

- It prevents viewing the site offline (e.g. on an airplane) without first scrolling through the whole content while online.

- Unless implemented perfectly it adds delays before content you have scrolled to becomes visible.

Deciding when to load stuff should be the browser's job. Don't reimplement the browser in javascript.

Fr0styMatt88(2642) 4 days ago [-]

The most annoying thing for me, as a vision-impaired user, is that AMP pages disable zoom.

I know that can be overridden in Chrome's accessibility settings, but it's a shitty practice that something like AMP shouldn't be promoting.

rpmisms(10000) 3 days ago [-]

I think the idea is that you'll either use a screen reader or use the system zoom tool. Not defending it, but I can see the logic they used.

rland(10000) 4 days ago [-]

Imagine if Cox or Comcast came up with AMP. Same technology, same idea.

If websites use the Comcast AMP framework, Comcast will cache their sites and make them faster for users. See, it's about the users! Because the Comcast AMP framework is open, and has nothing to do with the business interests of Comcast. Comcast will give a bit to open source and have a few conferences per year to make sure developers know that it's not all about the company.

I believe the fight against AMP will not be won by users or individual action -- it will be won with legislation. What are users going to do? Plain HTML pages are on the 10,000th page, below a hundred thousand tracker-loaded AMP piles of shit. It is a de facto content restriction.

Even if I use an alternate search engine, its results are polluted by those of Google, because 90% of search is Google. We do not have a choice.

I'm sure a Google lawyer will successfully argue that I can receive IP addresses in the mail via U.S. post for any odd, esoteric plain-text HTML pages I'd like to visit, though.

bduerst(10000) 4 days ago [-]

Comcast wouldn't do that because Comcast already flat out charges publishers like Netflix to not be deprioritized on their network. It would be analogous to Google charging publishers for top organic search positions.

dwild(10000) 4 days ago [-]

> If websites use the Comcast AMP framework, Comcast will cache their sites and make them faster for users.

They probably do it with Netflix, some ISP do it with Steam too... there's nothing wrong with multilevel cache (except cache invalidation).

maxaf(4122) 4 days ago [-]

> Don't use Google search.

This is easier said than done, as the other search engines are still not as good as Google, even though Google's results have been getting worse. This may be a controversial opinion, but it's not what my comment is about. I'm going to make a more scandalous suggestion.

Don't use search engines at all.

The idea that a centralized one-size-fits-all search engine is necessary is preposterous. The Web makes available all kinds of information, and unifying it all under a single data model is difficult, and doesn't even make sense. (Does anyone remember the semantic web?) Unifying the world's information behind a single search facade is likewise a Very Difficult Task (TM), one that's likely to fall into the trap of big business, as search has done, because the required resources are so huge.

But what if it's solving the wrong problem? Information of a particular type tends to gravitate to local centers of storage, so to speak, which are specific to the type of information being stored. For example:

- Encyclopedic knowledge is in Wikipedia.

- You can find places by searching Foursquare, Yelp, Apple Maps, OSM, ...

- Q&A about programming (and lots of other topics) is on StackExchange.

- News aggregators have been beaten to death, and multiple are available.

- You can search Twitter using Twitter, and Facebook using Facebook.

I can go on, but the point is clear: every single Web-connected system offers a search function of its own, one that's likely specialized to the type of information stored in that system. It'll most certainly do a better job at searching that local store, and will do so more quickly and cheaply than a centralized, generic search engine. This also avoids the moral hazard of search centralization.

This leaves the little guy: the random small website or blog, where the majority of true gems are found. Google locates these by sheer brute force: they literally index the entire web. They've taken a relative eternity to do so, but it's a problem that could have been solved by something better than mere force.

Does anyone remember webrings? https://en.wikipedia.org/wiki/Webring What if 'the little guys' organized in webrings and directories? This doesn't seem like a technical problem, as a webring or directory is trivial to build. Could this be a UX problem that hasn't been solved to the satisfaction of a modern Web user? Is anyone or anything taking another stab at this?

In closing, I'll throw out one last vague notion: that of an openly federated search. How cool would that be? We don't need Google for that at all.

mellavora(3875) 4 days ago [-]

And as far as the 'little guy' goes, there is plenty of research on network science which shows that the (original) Page Rank algorithm does not provide meaningful rankings for anything except big guys. The algo accurately scores/ranks the most connected web pages, but scores for the 'long tails' which make up 95% of the web are pretty close to random.

donohoe(154) 4 days ago [-]

Try: https://www.startpage.com/

It uses Google search, but with privacy on the level of DuckDuckGo.

a3n(3313) 4 days ago [-]

The web is large, and my processor is so very, very small.

fhennig(10000) 4 days ago [-]

I like the idea, but I often find search on the sites themselves crude and lacking. For example, the Wikipedia and Reddit searches are both not so great.

And what about if I want to search for a band, and I don't know if they have a website, a facebook page, a soundcloud page or if they have most of their material on youtube. Do I manually need to search and check and compare different platforms until I find the one where the band chose to host their content?

mlok(10000) 4 days ago [-]

!bangs in DuckDuckGo enable you to search directly on the destination website.


donohoe(154) 4 days ago [-]

I would like to point out that it is possible to have web pages that load faster than AMP. It has not been made easy, but many publishers have figured it out (in some cases publishers have web pages that load faster than their AMP ones...)

Take a look: https://webperf.xyz

I have a number of issues with AMP but I will just mention two:

1. If Google addressed how their ad system was being mis-used (and in many respects as-intended) that would have gone a long way to addressing webpage performance. Instead they pushed more work on the publisher to adopt yet another new format (add it to Facebook Instant Articles, Apple News JSON formats, Google News MediaRSS etc.)

2. AMP helped kill some early momentum to make pages faster. They sold a bandaid solution that was 'good enough' for management and undercut engineering efforts to address the root cause.

52-6F-62(3611) 4 days ago [-]

And in the process introduced a whole new layer of cruft and the number of bugs I hear from the web teams trying to implement content with AMP is... it's constant.

exabrial(4092) 4 days ago [-]

I remember looking at the original HTML5 spec and going, there is no way lower power / low bandwidth / high latency devices will be able to handle this efficiently. Nevertheless, we moved fast and broke things, ratified the HTML5 spec and paved the cowpaths without a second thought to language or efficiency.

Somewhere along the line, things came full circle. HTML was slow again, so we needed a new new way to efficiently render content, thus AMP was born.

account42(10000) 3 days ago [-]

In what way is HTML5 inherently slower than previous versions? Pages tend to be slow because of the tracking and ads added to them, as well as client-side 'rendering' of static content. None of that is required by HTML5.

ossworkerrights(10000) 4 days ago [-]

I really doubt Google will become 'irrelevant' anytime soon. What most of us nerds forget is that tech products become popular when they solve real-life issues for non-technical people. We can sit here and debate all day which protocols are better, which browser is cooler, and why Google is evil, but it will not become irrelevant anytime soon unless there is a better end-user product. Happy bashing Google, Facebook, Amazon, etc., because no one cares, really. What they care about is 'how to cook potato soup' showing relevant results, having their pictures generate likes, and having their unneeded products delivered tomorrow.

hknd(4170) 4 days ago [-]

And that's exactly the reason AMP is so popular with end users. It's fast, and it opens instantly on slow networks on your phone. End users don't know what AMP is, and most don't know that the lightning symbol means AMP. What they do know is that clicking it gives them fast results, and that's what 99% of people care about.

PavlovsCat(4194) 4 days ago [-]

> Happy bashing Google, Facebook, Amazon, etc because no one cares, really.

I'd rather work with 10000 people on something we can get excited about because it's actually cool and 'pure' at least in intent, if not in implementation, than with 10 billion people on the current trajectory of chained prisoners shuffling down a damp hallway. You're boring as fuck.

lazyjones(4140) 4 days ago [-]

> it will not become irrelevant anytime soon unless there is a better end user product.

Most of the end users today never made a conscious choice about this, in the same way most people didn't question Windows as 'the operating system' years ago. Google is pre-configured almost everywhere, its brand name is synonymous with web search and most people never even try anything else. IOW, it's not the superior quality of the end product that decides the market share today, it's sufficient quality and no good reason to switch.

christiansakai(10000) 4 days ago [-]

Serious question: what is a free alternative to Google Fonts?

commoner(3085) 4 days ago [-]

You can still use Google Fonts, but download the font files and host them yourself to prevent your users/visitors from being tracked by Google:
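Concretely, self-hosting comes down to serving the .woff2 files from your own domain and declaring them with a standard @font-face rule instead of linking to fonts.googleapis.com. A minimal sketch, with placeholder file paths:

```css
/* Serve the font file from your own server instead of fonts.gstatic.com */
@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 400;
  font-display: swap; /* show fallback text while the font loads */
  src: url('/fonts/roboto-regular.woff2') format('woff2');
}
body { font-family: 'Roboto', sans-serif; }
```

As a bonus, a same-origin font avoids an extra DNS lookup and TLS handshake, so it usually loads faster than the Google-hosted version too.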



macinjosh(3885) 4 days ago [-]

I don't understand how Google's AMP business strategies are dissimilar to what Microsoft got in trouble for with Internet Explorer. Would be interested in what someone who is knowledgeable on that topic has to say.

rossdavidh(4125) 4 days ago [-]

One could argue that it never really got Microsoft in _that_ much trouble. They retained Internet Explorer, and really all the requirements put on them by the U.S. and EU combined didn't amount to much.

Not saying it _shouldn't_ have gotten Microsoft into that much trouble, just that it did not.

jimmar(10000) 4 days ago [-]

As a user, AMP gives me a better experience than non-AMP search results. It's a tough sell to tell people like me that I should prefer the worse product (the slow, tracking infested, bloated full website).

millstone(10000) 4 days ago [-]

As an iPhone user I find AMP to be painfully buggy. Rotation doesn't work properly, the URL bar doesn't hide properly, reader mode is routinely broken, pinch to zoom doesn't work, etc. I wish I could disable it.

justinph(2787) 4 days ago [-]

I agree with all of this, but until google provides a way to appear in the discover box without having AMP-published pages, this is a non-starter for publishers. Ironically, by creating AMP, google has disincentivized publishers from making their canonical pages faster.

Publishers hate that google holds them hostage with AMP in this manner, but the situation is what it is, until someone from the Justice Department starts making the lords of Mountain View antsy.

noelsusman(10000) 4 days ago [-]

>Ironically, by creating AMP, google has disincentivized publishers from making their canonical pages faster.

In theory, maybe, but I think history has clearly demonstrated that publishers would not make their pages faster even if AMP didn't exist. That's why AMP has been so successful in the first place.

mcv(4213) 4 days ago [-]

I think the fact that Google gives higher ranking in their search results to sites that use Google technologies should be addressed in anti-trust investigations.

shadowgovt(10000) 4 days ago [-]

'How to make your sites faster than AMP without using AMP' leaves out 'locally cache a copy of your site in a CDN that is geographically close to your users.' Which is the actual mechanical part of AMP that makes it technologically interesting / valuable to content providers and countries distant from the creation of most content.

deminature(4123) 4 days ago [-]

This technology has been available to anyone via Akamai, CloudFront and other edge caching networks for years. It shouldn't require Google threatening AMP to get content publishers to adopt it.

justinph(2787) 4 days ago [-]

It also omits the part where Google starts preloading and rendering the AMP page while the user is still on the search results. Without that, AMP is often no faster than many non-AMP pages.

ogre_codes(10000) 4 days ago [-]

I'm not a fan of Google's proprietary web, but it's worth pointing out that this is largely a response to the increasingly shitty way publishers treat their users. Just reading basic articles on the web has become a painful exercise in dodging 'Subscribe' faux-pop-ups; trying to scan text while your vision is bombarded with unrelated video; and user-hostile scroll capture effects.

For much the same reasons Google AMP is a thing, I use Apple News for most of my news reading. The web has overtaken commercial broadcast television as the shittiest way of consuming content.

neop1x(10000) 3 days ago [-]

Why use proprietary Apple News when there are so many news aggregators and even RSS readers? Isn't it just laziness?

tomComb(10000) 3 days ago [-]

It's worth noting that Apple News takes a 50% cut of the revenue. On the web, a publisher is free to do whatever they want, so even when Google can insert itself into that, its cut is a fraction of the cut that Apple takes.

It's the sort of thing that reminds me why I want the web platform to remain competitive with iOS, Facebook, and Android. If not AMP, something like it was sorely needed.

sintaxi(3602) 4 days ago [-]

Also worth pointing out Google can and does rank websites by any criteria they choose. So if the top ranking sites have poor usability that is on Google in the first place.

IanSanders(4100) 3 days ago [-]

I started blacklisting hostile websites and encouraging people to do the same.

Pigo(10000) 4 days ago [-]

I guess because I work mostly on B2B or private apps, I'm not up on the latest for sites that are trying to drive traffic and views. SEO has always seemed like a dirty world of stepping on whatever you have to just to get clicks and to be the 500th app someone grants notification permissions to.

gkolli(10000) 4 days ago [-]

Sorry for the dumb question, but:

What does Google gain with AMP? How does it make money with it?

RandallBrown(3747) 4 days ago [-]

Google makes money by people browsing the Internet. If they can make browsing easier/better/faster, they will do it.

This is why they built Chrome and Android too.

yellowarchangel(10000) 4 days ago [-]

Google is trying to monopolize the internet, and they also sell data / ads. So 'spending more time in the Google ecosystem' inherently creates value for them.

brianzelip(4064) 4 days ago [-]

Here's one great takeaway:

> Treat the cause: Third-party requests slow down the web

> ...

> - Google owns 7 of the top 10 most popular third-party calls

> ...

> So you can see why there must be some kind of internal struggle at Google. They understand the value of a faster web but they also cannot go after the main cause of the slow web. And this is how technology such as AMP gets invented and makes things worse.

It blows my mind how many devs around here are devoted to their browser and search.

Stop using chrome. Honestly, wtf?! Firefox is awesome. FF dev tools are awesome. FF, like Wu Tang, is for the kids.

STOP USING google SEARCH! USE DUCKDUCKGO! Use the `!gm` google maps bang when you need it. Use the `!g` google bang in a pinch, but for all of our sake, please wean yourself off of google search.

These two steps are immensely easy to do, and yet a MAJOR investment in all of our future.

DCKing(3905) 4 days ago [-]


I want to stop using Google, but please realize that DuckDuckGo is only competitive with Google if English is your only language (maybe even only if you're American?). There are loads and loads of people on HN for whom DDG is a poor experience.

In my experience DDG is extremely poor for localized results, especially those in other languages than English. Previously I recommended StartPage.com for my fellow Europeans, but StartPage has been bought by a shady company [1] and should not be used anymore either. I have no recommendation anymore.

[1]: https://reclaimthenet.org/startpage-buyout-ad-tech-company/

atomi(10000) 4 days ago [-]

This is just a sliver of the perversion created when you rely on advertising for profits. Everyone thinks free stuff is great but it means you're the product and an entire global economic ecosystem is being built on that.

merpnderp(10000) 4 days ago [-]

I switched to DDG many years ago and stopped needing to use Google search for anything maybe 2 years ago. The everyday experience of using DDG is so much more enjoyable than Google, which becomes clear once in a blue moon when I do run a Google search and see all the ads and poorly placed results, and feel the heavy weight of Google's invisible hand.

deusofnull(10000) 4 days ago [-]

One of the most trivially annoying problems with switching between the two is the difference in how bookmarking works in FF vs. Chrome.

I have to use Chrome for work because of a security rule of questionable legitimacy (to me).

In FF, you bookmark a page by dragging the tab into a folder.

In Chrome, afaik, you have to star it and then select what folder it goes into.

Another thing: in Chrome, if you context-click inside of a bookmarks folder and create a new folder, it creates the new folder nested within the folder you clicked.

In Firefox, that's not the case: wherever you click, I believe it creates the new folder at the top level, and then you can drag it to a subfolder. It's a small problem, but if you're like me with lots of bookmarks[1], it is a pain in the ass.

[1] For example, I have a folder for every year, with a subfolder for each month, and whenever I find non-specific cool stuff, I toss the bookmark in this month/year folder. At this point I have these going back to 2016, and it's become a cool sort of journal or scrapbook type thing. I highly recommend this practice to people; it's basically the use case of many third-party browser add-ons for bookmark management, but built into the browser by default.

Anyway, long live Firefox!

Mirioron(10000) 4 days ago [-]

>Stop using chrome. Honestly, wtf?! Firefox is awesome.

I think plenty of people have found Mozilla to have made some poor decisions over the years. It feels as though you have to choose between two options that aren't great, so is there a point in switching? Take the add-ons situation. At least on Chrome I can run my own add-ons.

reificator(10000) 4 days ago [-]

> FF dev tools are awesome.

I switched back about a year or so ago. There are some nice things in the FF dev tools compared to Chrome, but on the whole, even a year later I still find myself opening Chrome to do something specific here and there in dev tools.

entelechy0(4187) 4 days ago [-]

Thoughts on Brave?

stiray(4105) 4 days ago [-]

After 6-7 years of not using Google, I can vouch for this. There is just no reason to use it (I am writing this on HN; by just being here you are not the case of 'just another user'). Their search has become mediocre due to all the AdWords, and it has never been so simple to put together a Nextcloud/Dovecot/Postfix/Searx-powered server. Once I tried to block the Facebook, Google, and Amazon IP ranges using data scraped from ASNs. It is incredible what a large piece of the internet those companies have taken over for their own profit. Even DuckDuckGo wouldn't load (probably hosted on Microsoft/Amazon/Google cloud), and Yahoo redirected me to a GDPR 'consent' page, which was again on Microsoft/Amazon/Google cloud. We really don't want to live in a world ruled by their advertising engines. Fight back: block ads, run everything on premises, remove the Google/FB/... spyware from your ROMs, and stop pretending you have friends based on the number of likes on Facebook. Just walk away from this, or we will end up in a corporation-driven dystopia you really don't want to live in.

And read this book: https://en.wikipedia.org/wiki/Surveillance_capitalism

It is too late for fashion and no time for fanboyism; we blew it. We are on the verge of destroying the freedom of the internet. (I will post the Facebook, Google, and Amazon ASN-based IP ranges later when I get home, so you can try it on your own; it is really hilarious. Of all the search engines I know, only Yandex was still operational. And those are RIPE records.)
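The ASN-based blocking described above can be sketched in a few lines. Note this only shows turning a prefix list into iptables drop rules; actually fetching the announced prefixes for an ASN (e.g. from RIPEstat's announced-prefixes endpoint) is omitted, and the chain and example prefixes are illustrative:

```python
def drop_rules(prefixes, chain="OUTPUT"):
    """Format iptables commands that drop traffic to the given CIDR prefixes.

    The prefix list for an ASN would come from elsewhere (e.g. RIPEstat);
    this sketch only covers formatting the firewall rules.
    """
    return [f"iptables -A {chain} -d {prefix} -j DROP" for prefix in prefixes]

# Example prefixes only; a real run would use an ASN's announced prefixes.
for rule in drop_rules(["8.8.8.0/24", "142.250.0.0/15"]):
    print(rule)
```

The point of generating the rules rather than typing them is that large companies announce hundreds of prefixes, which is exactly what makes the experiment described above so striking.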

cletus(3080) 4 days ago [-]

> It blows my mind how many devs around here are devoted to their browser and search.

Are they though? Or is AMP just a storm in a teacup? There are a handful of people on Earth who will stop using Google search (in particular) and Chrome as a protest against AMP.

> ... and yet a MAJOR investment in all of our future.

[citation needed]

Personalized search is not inherently bad. Use Google search without being logged in if you really care. Harder to do with Chrome but Chrome is actually a great user experience. It still blows my mind that FF prompts me to restart to install an update when I start it up.

No I don't want to install an update. I want to use the browser. That's why I opened it. How many years has it been since Chrome added auto-update on restart with silent updates?

Beyond Search, GMail, Maps and Chrome though I just won't use another Google service and take the risk that some automated system will decide my account is in violation of some ToS no one has ever read and shut down access to every single Google service I use.

Whoever signed off on doing this through the Google Plus boondoggle needs to be fired (although, to be fair, I think ad accounts getting suspended arbitrarily and in some cases incorrectly was already a thing by then).

40four(4222) 4 days ago [-]

Preach brother preach!

I made this move myself many months ago & never looked back. There is basically nothing I miss after dropping Chrome/Google search. My web experience using Firefox & DDG is indistinguishable from before.

One of the only minor things I've had to adjust to is prefixing searches with my local city & state when I'm searching for things in my hometown (Imagine that! My search engine website isn't tracking my physical location!). A small price to pay for a HUGE gain in privacy & peace of mind, knowing I'm no longer being exploited and rolled up into a package, sold to advertisers.

I got so sick of seeing targeted ads in EVERY website I visit. Something I searched the day before. Maybe a website I visited a few days ago. Then I have to look at ads for those things for the next week straight!? No thanks.

After making two super easy changes, Chrome -> Firefox & Google search -> DDG, I almost NEVER see creepy, annoying targeted ads anymore. It's amazing!

idoubtit(10000) 4 days ago [-]

> Stop using chrome. Honestly, wtf?! Firefox is awesome.

I do not use Chrome, but I don't think Firefox is awesome. While I use FF, partly for moral reasons, my experience is average and I saw many flaws and bugs. I prefer the features of Vivaldi, which is based on Chromium, but does not phone home to Google.


While my default search engine is DuckDuckGo, I don't recommend it to friends and relatives. My experience is that searching in any language but English gives very poor results. Other query patterns also need to be redirected to another search engine, so it's not a smooth migration.

My estimate is that 50% of my searches go to DDG, 25% to Google, and 25% to others (Bing, Qwant), not counting specialized searches like Wikipedia. That is enough to prevent being profiled.

pduan(4185) 4 days ago [-]

I tried DDG for a day on my phone. I follow stocks and will often enter '[ticker] stock' and Google search shows me that nice chart at the top.

DDG doesn't have anything like that and forces you to click into a financial website.

I had to switch back. If it added that widget, I'd probably make the switch full time.

pbreit(2538) 4 days ago [-]

Firefox's lousy 'profile' handling is still a show-stopper for me.

LoSboccacc(4201) 4 days ago [-]

> Firefox is awesome.

it is doing great now, but lost a great deal of marketshare precisely because it was a slow, bloated mess

> FF dev tools are awesome.

Debatable. Chrome's hot code replace while debugging is a great asset. Mounting a local workspace to synchronize changes is situational, but when you can leverage it, it's great. Firefox's code view 'find in files' only searches linked JavaScript; Chrome searches everywhere, so it can catch references in inlined JavaScript.

I work with both regularly. For personal usage I prefer Firefox, for the obvious reasons, but I find a lot less friction in developing using Chrome.

rising-sky(1902) 4 days ago [-]

For those concerned about switching from Google to DuckDuckGo, maybe this article will aid in convincing you:

I ditched Google for DuckDuckGo. Here's why you should too https://www.wired.co.uk/article/duckduckgo-google-alternativ...

MadWombat(10000) 4 days ago [-]

> Use the `!gm` google maps bang when you need it

That is not how it works for me. One of the common things I might do is look up a place, like a bar or a restaurant or a theater or whatever. When I do this in Google, I immediately get a link to the website, a link to the reviews, pictures of the inside and outside, and a link to the map. Right at the top of the page. With DDG, all I get is a link to the website if the place has one and possibly a link to a Yelp page. I have to do multiple additional searches and then cut and paste the address into a separate tab to open the map to get the same result.

Kaiyou(10000) 4 days ago [-]

What's the point of switching to DDG, though? Under the hood it's just a combination of Google and Bing, isn't it?

latexr(4134) 4 days ago [-]

> Firefox is awesome.

Not if you're a macOS user, in particular one that cares for automation. The lack of AppleScript (the bug report that tracks it is old enough to vote) prevents people from considering it as a daily browser. And I'm not just talking about developers. I regularly steer non-technical people away from Firefox because when they ask why my tools—which they want to use—don't work on Firefox, I have to tell them the truth: I'd like to support Firefox, but I can't.

yread(248) 4 days ago [-]

I would say also STOP using Google Analytics, Google CDNs, Google fonts, Youtube for hosting your content and all the spying 3rd party services. It's perfectly doable to have a commercial website without them.

neillyons(10000) 4 days ago [-]

I've been using Safari and Duck Duck Go for the past six months. I've also moved my email to Fastmail. Happy to become less reliant on Google.

nift(10000) 4 days ago [-]

Do you know whether using !g routes the request through DDG (not removing tracking, but hopefully reducing it?) or whether it's like going to Google.com directly?

I use DDG and 'bang' myself to better results if DDG fails me. Love the amount of features.

My only complaint is it sucks at helping me spell words, especially in other languages than English.

Sadly we use Google mail at my work so probably can't avoid it completely.

jypepin(3807) 4 days ago [-]

I switched to FF a few weeks ago and (this time!) haven't looked back. It's as good, if not better. I had tried to switch multiple times a year for a few years, and I think now is finally a good time.

I was going to switch to DuckDuckGo as you say, only to realize that, oh, I had already switched to DuckDuckGo when switching to FF but had forgotten. That's how easy and seamless it is now; I even forgot I was not searching on Google anymore.

Avamander(10000) 4 days ago [-]

DDG is worse than Google for me and totally useless for any searches in my native language.

Firefox has 50% worse performance for me in WebGL applications. Media decoding is also still not accelerated on Linux. Where is the 'awesomeness' in that? I really wish those caveats didn't exist for what I use daily.

larusso(10000) 4 days ago [-]

I never used Chrome because I saw no reason to switch away from Firefox. I have used Firefox since 2004ish. My move to DuckDuckGo came together with my decision to delete my Facebook profile. I use Firefox containers to keep Amazon and my work Google apps in check. I'm very happy with this setup.

nailer(414) 4 days ago [-]

> Stop using chrome.

FF is great, but it's also worth pointing out that Edge has all the Google adware removed and better privacy. There was a post from Eric Lawrence a little while back detailing all the Google stuff they removed; it was about 25 separate components. There's telemetry, though, so make up your own mind if you're cool with that.

golergka(2603) 4 days ago [-]

I switched to DuckDuckGo as my main search engine a couple of months ago, but for complicated queries I just have to go back to Google. It's that bad.

wannabag(10000) 4 days ago [-]

Sometimes I don't agree with n-gate and just feel at home in the comment section of HN... Thanks.

The rest of the time I stay away...

syshum(10000) 4 days ago [-]

While I agree we need to stop using chrome, I don't know that FF is the path forward.

Mozilla's continued trend away from openness and the foundation's original goals, and toward business objectives and corporate 'morals' similar to those seen at Google, makes it seem like we are just replacing one corporate master with another in the modern Mozilla, Inc.

Long gone are the days of the Mozilla Foundation standing up for users against the corporate goliath Microsoft; today they simply adopt whatever Google wants them to, provided Google calls it a 'standard'.

riku_iki(10000) 4 days ago [-]

It's more about whether they stop earning money: the large userbase still uses Chrome and Google Search regardless of individual dev preferences.

KirinDave(641) 4 days ago [-]

The real challenge here is that, for the most part, AMP gives a better experience to users than publishers are willing to provide. As such, users click on AMP links and expect good things to happen. Your pitch is, 'stop using Chrome so that publishers have more options for making the web slower and more instrumented, because it's morally right.'

That is a tough sell. Users do not really care (nor should they) what publishers want to do to make sites profitable. That space is so fraught with abuse that it's not going to get any sympathy from the user side, anyways.

mda(4214) 4 days ago [-]

Do you have any data that Google's services were the actual cause of the slowdown of websites?

mbesto(3156) 4 days ago [-]

> Firefox is awesome. FF dev tools are awesome. FF, like Wu Tang, is for the kids.

FF is awesome, but have they fixed the battery issues on Macs yet? It's a non-starter for me to switch.

sm4rk0(10000) 4 days ago [-]

Marko, bravo for making that clean and readable site (and having 0 trackers, as reported by Firefox Klar)!

There's one issue I noticed when clicking the links to page sections (#anchors). Those lazy-loaded images make the page scroll away from the section title I jumped to. Is it possible to fix that by having images replaced by placeholders of the same size?

markosaric(10000) 4 days ago [-]

Thanks for the kind words and yeah no trackers/third-party calls/cookies etc. Have tested on regular Firefox for mobile and on Firefox Preview too (you should try it if you like Klar) and it works fine on both. I'll see if I can add placeholders when all this traffic slows down. Don't dare to touch much at this time.

RobertRoberts(10000) 4 days ago [-]

I think there needs to be a real alternative solution from Google for longer-term change (if Google wants to stay relevant).

Can they just penalize slow/large file size sites in their index?

This seems like the underlying goal behind AMP and Chrome's Lighthouse (site audit tool) anyway.

This would make a lot of sites fast in the next couple months. But maybe Google doesn't really want that?

thejohnconway(10000) 4 days ago [-]

> Can they just penalize slow/large file size sites in their index?

Well, that would be bad for the independent web, wouldn't it? I mean, you'd be penalising sites with large amounts of content that aren't on a CDN. Large images or videos, for example, might be the entire point of the page in the first place. I don't want to be directed to a webpage about an artist (for example) that has the crappiest, smallest, and fastest-loading images; I want the one with the best images.

windsurfer(3559) 4 days ago [-]

AMP's stated goal is to make pages load faster, but it's obvious that this goal is only a start. It's essentially the second part of Microsoft's old 'Embrace, Extend, and Extinguish': Google wants to extend the third-party advertising and content that exists on the web so that it may extinguish it.

Google's mission is to 'organize the world's information'. How can you better organize the world's information when other companies control it? Nothing better than taking control of that information directly. Now you can organize and reformat it to your own wishes. Nothing evil about that, mind you, but it does make you think about their next steps.

donohoe(154) 4 days ago [-]

Yes - they do this.

Slow page loading is a big signal (out of many) in SEO.

wmf(2068) 4 days ago [-]

W3C is working on neutral/non-locked-in replacements for AMP like feature policies and Web Packaging but standards take years to deploy.

throwaway13337(4026) 4 days ago [-]

The reason they do not just penalize slow-loading sites in a big way is that those are the sites they directly profit from.

Most page weight seems to come from tracking and ads. Google IS the most popular tracking and ad provider on the web.

If google penalized slow loading very much, they'd be hurting their own revenue and data collection. It's a conflict of interest.

AMP allows these big, ad-driven, heavyweight sites to still take all the top spots on the web. Meanwhile, terrific content is buried because of its failure to implement AMP.

Google's self-interested ranking of sites is shaping the web very negatively - towards more tracking and more ads.
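One way to eyeball how much of a page's weight is third-party is to list the external script hosts in its HTML. A small stdlib-only sketch (class and function names are illustrative):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSrcParser(HTMLParser):
    """Collect the host of every external <script src=...> tag."""
    def __init__(self):
        super().__init__()
        self.hosts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                host = urlparse(src).netloc
                if host:  # relative paths like /app.js have no host
                    self.hosts.append(host)

def third_party_hosts(html: str, page_host: str) -> set:
    """Return script hosts that differ from the page's own host."""
    parser = ScriptSrcParser()
    parser.feed(html)
    return {h for h in parser.hosts if h != page_host}

html = ('<script src="https://www.googletagmanager.com/gtag.js"></script>'
        '<script src="/app.js"></script>'
        '<script src="https://example.com/local.js"></script>')
print(third_party_hosts(html, "example.com"))
# {'www.googletagmanager.com'}
```

Run against a real news site's HTML, the set tends to be long, which is the conflict of interest described above in miniature.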

chrisan(3418) 4 days ago [-]

Page speed is already a penalty factor in ranking. Think they introduced this last year

edit: https://webmasters.googleblog.com/2018/01/using-page-speed-i...

foxhop(3867) 4 days ago [-]

If you run a NAS at home, you should also set up Syncthing. It replaces Google Drive for your phone's camera, seamlessly.

EamonnMR(3898) 3 days ago [-]

I've been looking into doing this, have you got any pointers to setting this up? I'm trying to do it with a raspberry pi and a big external hard drive. Darned thing keeps shutting down on me though, and the pi won't boot with the HDD plugged in. Thanks for reminding me to pick up a powered USB hub...

buboard(3489) 4 days ago [-]

It's funny that, if this were a net neutrality issue, e.g. Comcast pushing a new video format and prioritizing video for its own streaming service, the internet would be a warzone by now. But we all love Google. Google used to be all fuzzy and unevil. But now they're evil.

system2(4216) 4 days ago [-]

How can anyone categorize AMP as evil? They are improving and leading. No one needs to use Google, they have alternatives. This is how things evolve. Someone else will create another amp alternative and things will change later. Netflix was #1, then Hulu, Disney, amazon video came. No one lasts forever, everyone needs to evolve and this exactly is what google is doing.

privateSFacct(10000) 4 days ago [-]

For those interested in security - AMP basically forces the iframe javascript sandbox security model.


Even reputable web pages tend to have a metric TON of non-sandboxed javascript from third parties. If you care about your security this is a risk.

If you stick with AMP - this is - by spec - prohibited.

Something to think about as you browse the web gobbling down javascript and all the other third party javascript being pumped at you.

rpmisms(10000) 3 days ago [-]

Only now can we finally run JS in AMP. That had been a big 'no' for us, since we couldn't run our normal funnel using AMP.

ogre_codes(10000) 4 days ago [-]

Which conveniently ensures that the only way you can effectively monetize AMP articles is via Google's own advertising networks which don't have constraints on running javascript.

JohnFen(10000) 4 days ago [-]

> For those interested in security - AMP basically forces the iframe javascript sandbox security model.

AMP is the wrong way to address this. Using a good browser is the right way.

chrismatheson(10000) 4 days ago [-]

Is the answer, just say no. ?

justincredible(10000) 4 days ago [-]

No, an open web requires integrity and dedication to resist the tide of AdTech.

StacyRoberts(10000) 4 days ago [-]

How is this not copyright infringement? Can't we fight it on those grounds?

yellowarchangel(10000) 4 days ago [-]

Websites choose to use AMP because it has favourable search results.

soyyo(10000) 4 days ago [-]

I worked on AMP for a leading newspaper, and everyone who says that AMP is about 'making the web faster on mobile' is either very naive or doing marketing for Google.

For publishers, AMP is about trying to top the results on Google search and capture traffic; it's their only motivation to publish their content using AMP, and the only metric they look at in order to evaluate the results.

Once they have their AMP content, they will look at how to load it with ads and tracking, which very conveniently is supported on AMP, just as on their regular sites.

So the 'fast' part, besides using their CDN, actually comes from limiting what you can do on almost every other part of the site; you can only do the stuff that is packed into the AMP components controlled by Google, which in practice means that Google controls web behavior.

lern_too_spel(4220) 4 days ago [-]

AMP is fast because it prerenders. That is the main point of AMP, and you seemingly don't understand it after working on it. It amazes me how little developers understand their platforms these days.

827djdjs6(10000) 3 days ago [-]

Those poor publishers, really makes me miss the early 2010s internet.

brentonator(4120) 4 days ago [-]

I invite you to go to your local Fox/ABC/CBS channel's website and TRY to read an article vs AMP.

AMP is about stopping... that. It's indescribable how horrible these companies have become:

1. Auto-playing ads
2. Scroll-jacking
3. Overlay... after overlay... after overlay
4. Popover
5. Paywall
6. Popover again, for good taste
7. Oops, you scrolled too far; better redirect you to another page entirely
8. You wanted the video version of this article, right? Better force you to read the article in 20% of the screen so all of our ads, bars, and video can fit on the page

privateSFacct(10000) 4 days ago [-]

If you work for a media org, then you KNOW that on the non-AMP pages the synchronous JavaScript from a hundred tracking/ad platforms (why can't these sites just use ONE tracking library?) CRUSHES the page load times.

Your claim that Google supports sync JavaScript ad libraries 'just as they do in their regular sites' is 100% false.

The AMP JavaScript components have DOM interaction restrictions, file size restrictions, response restrictions, can't run sync, etc.

for amp-ad

'No ad network-provided JavaScript is allowed to run inside the AMP document. Instead, the AMP runtime loads an iframe from a different origin (via iframe sandbox) as the AMP document and executes the ad network's JS inside that iframe sandbox.'

If you can't understand why some of these steps result in both a faster site and one that is more secure I can't help you, but please stop with the misinformation here.

mrtksn(3411) 4 days ago [-]

Serious question: what else are those websites supposed to do (that AMP does not provide) with all those megabytes of scripts?

Newspapers display a text and an image and very rarely an interactive content(Election day maps and charts, mostly).

Is there a reason, FROM THE USER'S PERSPECTIVE, to have a different website codebase for each publisher?

For years the web community kept creating new JavaScript libraries every day, and all these libraries were about providing a different way to do the same thing. No one ever created anything for the users; in fact, AMP is the first web technology that improves the user experience. It loads fast, and not too much stuff happens just to display a text and an image.

Web people are mad at Google, and I think they should be, but all this happens because the web publishers refuse to compete on user experience. They all optimize for the clickbaitiest title or most controversial topic, and Google came and steamrolled their publishing tech.

I can't really blame Google for this one; you can check, I am critical of Google, but I am more critical of the news business and the web tech community that optimized for very bad KPIs that destroyed democracy, made the web unpleasant, and are now crying because someone demolished their low-quality business.

From THE USER'S PERSPECTIVE, AMP is a godsend. You can quickly view and skim low-quality content. The alternative is slowly viewing and skimming low-quality content.

It seems like the web technologists are unaware that they are dealing with real human beings, optimizing blindly for page views and CPMs.

AMP is YouTube for written content: a streamlined content delivery platform prioritizing UX that the publishers failed to create themselves all these years.

bytematic(10000) 4 days ago [-]

Exactly: use Google AMP and get higher rankings, and it can increase your traffic by hundreds to thousands to millions. I hate it, but that's how it works.

noelsusman(10000) 4 days ago [-]

Of course AMP has other motives than simply making the web faster for mobile, but it also does make the web faster for mobile. I guarantee the AMP version of that paper's mobile website provided a significantly better user experience than the normal version. That is why AMP has been successful, and it's why I will continue to click on AMP links whenever available.

Is giving Google that much control ideal? Of course not, but from a user perspective it's a hell of a lot better than the alternative.

emodendroket(10000) 4 days ago [-]

No, I'm well aware that that's what it's 'about,' just like I'm well aware that the search engine is mainly about advertising. I just don't care what it's about so long as the results are beneficial to me -- which they definitely are. When I see the AMP icon I know the page is going to load much faster

s17n(4215) 4 days ago [-]

> So the 'fast' part, besides using their CDN, actually comes from limiting what you can do on almost every other part of the site

That is a good thing.

Historical Discussions: Sundar will be the CEO of both Google and Alphabet (December 03, 2019: 1203 points)
A Letter from Larry and Sergey (December 03, 2019: 7 points)

(1203) Sundar will be the CEO of both Google and Alphabet

1203 points 6 days ago by minimaxir in 95th position

blog.google | Estimated reading time – 7 minutes | comments | anchor

Our very first founders' letter in our 2004 S-1 began:

"Google is not a conventional company. We do not intend to become one. Throughout Google's evolution as a privately held company, we have managed Google differently. We have also emphasized an atmosphere of creativity and challenge, which has helped us provide unbiased, accurate and free access to information for those who rely on us around the world."

We believe those central tenets are still true today. The company is not conventional and continues to make ambitious bets on new technology, especially with our Alphabet structure. Creativity and challenge remain as ever-present as before, if not more so, and are increasingly applied to a variety of fields such as machine learning, energy efficiency and transportation. Nonetheless, Google's core service—providing unbiased, accurate, and free access to information—remains at the heart of the company.

However, since we wrote our first founders' letter, the company has evolved and matured. Within Google, there are all the popular consumer services that followed Search, such as Maps, Photos, and YouTube; a global ecosystem of devices powered by our Android and Chrome platforms, including our own Made by Google devices; Google Cloud, including GCP and G Suite; and of course a base of fundamental technologies around machine learning, cloud computing, and software engineering. It's an honor that billions of people have chosen to make these products central to their lives—this is a trust and responsibility that Google will always work to live up to.

And structurally, the company evolved into Alphabet in 2015. As we said in the Alphabet founding letter in 2015:

"Alphabet is about businesses prospering through strong leaders and independence."

Since we wrote that, hundreds of Phoenix residents are now being driven around in Waymo cars—many without drivers! Wing became the first drone company to make commercial deliveries to consumers in the U.S. And Verily and Calico are doing important work, through a number of great partnerships with other healthcare companies. Some of our "Other Bets" have their own boards with independent members, and outside investors.

Those are just a few examples of technology companies that we have formed within Alphabet, in addition to investment subsidiaries GV and Capital G, which have supported hundreds more. Together with all of Google's services, this forms a colorful tapestry of bets in technology across a range of industries—all with the goal of helping people and tackling major challenges.

Our second founders' letter began:

"Google was born in 1998. If it were a person, it would have started elementary school late last summer (around August 19), and today it would have just about finished the first grade."

Today, in 2019, if the company was a person, it would be a young adult of 21 and it would be time to leave the roost. While it has been a tremendous privilege to be deeply involved in the day-to-day management of the company for so long, we believe it's time to assume the role of proud parents—offering advice and love, but not daily nagging!

With Alphabet now well-established, and Google and the Other Bets operating effectively as independent companies, it's the natural time to simplify our management structure. We've never been ones to hold on to management roles when we think there's a better way to run the company. And Alphabet and Google no longer need two CEOs and a President. Going forward, Sundar will be the CEO of both Google and Alphabet. He will be the executive responsible and accountable for leading Google, and managing Alphabet's investment in our portfolio of Other Bets. We are deeply committed to Google and Alphabet for the long term, and will remain actively involved as Board members, shareholders and co-founders. In addition, we plan to continue talking with Sundar regularly, especially on topics we're passionate about!

Sundar brings humility and a deep passion for technology to our users, partners and our employees every day. He's worked closely with us for 15 years, through the formation of Alphabet, as CEO of Google, and a member of the Alphabet Board of Directors. He shares our confidence in the value of the Alphabet structure, and the ability it provides us to tackle big challenges through technology. There is no one that we have relied on more since Alphabet was founded, and no better person to lead Google and Alphabet into the future.

We are deeply humbled to have seen a small research project develop into a source of knowledge and empowerment for billions—a bet we made as two Stanford students that led to a multitude of other technology bets. We could not have imagined, back in 1998 when we moved our servers from a dorm room to a garage, the journey that would follow.

Sundar sent the following email to Googlers on Tuesday, December 3:

Hi everyone,

When I was visiting Googlers in Tokyo a few weeks ago I talked about how Google has changed over the years. In fact, in my 15+ years with Google, the only constant I've seen is change. This process of continuous evolution -- which the founders often refer to as 'uncomfortably exciting' -- is part of who we are. That statement will feel particularly true today as you read the news Larry and Sergey have just posted to our blog.

The key message Larry and Sergey shared is this:

While it has been a tremendous privilege to be deeply involved in the day-to-day management of the company for so long, we believe it's time to assume the role of proud parents—offering advice and love, but not daily nagging!

With Alphabet now well-established, and Google and the Other Bets operating effectively as independent companies, it's the natural time to simplify our management structure. We've never been ones to hold on to management roles when we think there's a better way to run the company. And Alphabet and Google no longer need two CEOs and a President. Going forward, Sundar will be the CEO of both Google and Alphabet. He will be the executive responsible and accountable for leading Google, and managing Alphabet's investment in our portfolio of Other Bets. We are deeply committed to Google and Alphabet for the long term, and will remain actively involved as Board members, shareholders and co-founders. In addition, we plan to continue talking with Sundar regularly, especially on topics we're passionate about!

I first met Larry and Sergey back in 2004 and have been benefiting from their guidance and insights ever since. The good news is I'll continue to work with them -- although in different roles for them and me. They'll still be around to advise as board members and co-founders.

I want to be clear that this transition won't affect the Alphabet structure or the work we do day to day. I will continue to be very focused on Google and the deep work we're doing to push the boundaries of computing and build a more helpful Google for everyone. At the same time, I'm excited about Alphabet and its long term focus on tackling big challenges through technology.

The founders have given all of us an incredible chance to have an impact on the world. Thanks to them, we have a timeless mission, enduring values, and a culture of collaboration and exploration that makes it exciting to come to work every day. It's a strong foundation on which we will continue to build. Can't wait to see where we go next and look forward to continuing the journey with all of you.

- Sundar

See Alphabet's press release.

All Comments: [-] | anchor

rosybox(10000) 6 days ago [-]

Alphabet and Google being a separate company makes even less sense now.

shadowgovt(10000) 6 days ago [-]

It's a line-item hack, the way that Hollywood studios structure films to be separate sub-corporations with their own fixed budgets.

So if a film crew, say, accidentally blows up a small town somehow, there's a firewall between the assets that were dedicated to making that one film and the entirety of, for example, Sony Pictures net worth and capital.

I'm not sure that firewall is well-tested in American law, but it's a well-used approach, and I've always assumed the Google/Alphabet arrangement was for similar reasons (so that the worst-case scenario on any 'bet' is always that Alphabet cuts bait and shuts it down, liquidates it, and debtors go after the assets of the 'bet', without risking the performance numbers of the Google cash cow directly).

paxys(10000) 6 days ago [-]

I imagine Sundar is going to go 'spring cleaning' on a bunch of Alphabet companies relatively soon.

jedimastert(4104) 6 days ago [-]

Most of the things under Alphabet don't really make sense under Google either.

viburnum(3425) 6 days ago [-]

These guys have all the money and power in the world, and they've used it to turn an excellent service into a sleazy ad machine. They have $50 billion each. What would it have cost them to run Google without making it trashy? Would they have even noticed if they only had $5 billion each?

Skunkleton(10000) 6 days ago [-]

Don't blame them; blame corporate capitalism. This is where all publicly traded American businesses end up.

radiusvector(4077) 6 days ago [-]

Wow, Sundar is truly phenomenal.

ciustuc(4210) 6 days ago [-]

'We could not have imagined, back in 1998 when we moved our servers from a dorm room to a garage, the journey that would follow.'

Starkus(10000) 6 days ago [-]

Google's political censorship is outrageous and dangerous.

reubensutton(3974) 6 days ago [-]

What's the relevance of that to this post?

andrewstuart(1000) 6 days ago [-]

Google needs to fix its reputation for killing products.


michaelhoffman(2954) 6 days ago [-]

Google had that reputation wayyyyyy before Sundar was CEO.

shadowgovt(10000) 6 days ago [-]

Honestly, part of Google's approach is to kill products. They're a company with a startup-incubation-emulator running inside them. Things that seem worthwhile get resources, but if a product can't make its way to profitability in some amount of time, it dies.

They should probably be more up-front about the messaging around this, but if you step back and look at their practice, that's how it appears to work. User numbers don't matter; profitability matters. Not unlike startups in the long-run.

drcode(3307) 6 days ago [-]

Yes, I agree that they should remove a layer of management to achieve a more focused vision around their products. (sarcasm)

UncleMeat(10000) 6 days ago [-]

I find it stunning that of all of the communities it seems to be HN that has jumped on this idea the most. Surely people entrenched in the startup world understand the value of pivoting and dropping products that aren't working.

beaner(3820) 6 days ago [-]

One thing I really like about Larry Page is that it's obvious he never really wanted to be a manager. He's a real nerd at heart who likes technology. But he also realized the value of what he created and acted as a good steward for 20 years. He took the reigns when he needed to, and let others take over when the time was right. He's still one of the people I look up to most.

smt88(4211) 6 days ago [-]

> He took the reigns when he needed to

I strongly disagree. Alphabet is an unfocused company which seems to treat product releases as experiments that can be killed at any time without warning. There's a culture that incentivizes more experiments rather than better products.

The aimlessness is obvious in the frequent and baffling product renaming, reorganizations (like Nest joining Google), duplicated/overlapping products, and killing of acquired products.

A smart CEO knows how to focus, build on early success (rather than abandon), and tell a coherent brand story.

JohnFen(10000) 6 days ago [-]

> He's still one of the people I look up to most.

I don't. He's allowed Google to become many of the things he's long stated he was opposed to. Either he never had his stated values in the first place, or he's turned his back on them in later years.

draw_down(10000) 6 days ago [-]

I certainly can't blame him for that. I mostly wonder why he felt obligated to hang on as long as he did.

dctoedt(393) 6 days ago [-]

> he took the reigns

Friendly amendment: reins

summerlight(10000) 6 days ago [-]

There is some ongoing theory-crafting in this thread, but the real reason seems pretty simple: Larry and Sergey obviously don't want to deal with all the management and operational stuff, only 'moonshots' like autonomous vehicles or quantum computing. Yeah, this was the whole purpose of establishing a holding company, but Alphabet has doubled since then (in both employee count and revenue), and they're probably now facing a similar amount of bureaucratic workload again.

Personally, I think Sundar has been a pretty good CEO, and probably a better businessman for an organization of this size, but I'm still not sure whether this leadership change will work better for Google. Due to Alphabet's structure, Larry and Sergey will keep majority voting stock but will be away from most of the details in the company. Can they still make good business decisions without such details?

seppel(10000) 6 days ago [-]

My personal prediction is that Sundar will be to Google what Ballmer was to Microsoft: he will make tons of money but will drive Google into a corner where it will be disrespected by, well, hackers (that is, people who are interested in technically open solutions, configurability, fitness for unplanned purposes, etc.). But it is refreshing that Microsoft is now going back in the opposite direction.

blisterpeanuts(4092) 6 days ago [-]

Sundar made a mistake in wading into politics after the 2016 elections. A good business leader knows to stay out of the forbidden topics of politics and religion; unite the troops, don't divide them. I don't think he realized how much impact his words would have on conservatives once the TGIF video got out. Anyway let's hope they learned something about remaining neutral in public, as the company 'grows up'.

prepend(10000) 6 days ago [-]

> I think Sundar has been a pretty good CEO and probably a better businessman

Has he though? I get the sense that he's really Ballmer-ing it up, in that he inherited a super successful company and didn't do much with it except not screw up.

The two big growth areas, social and cloud, are respectively dead and a distant (though growing) third.

Google hasn't done much new or exciting through his whole term. So no new products, the 2016 election and congressional testimony debacle, randomly firing different people with no sound reasoning.

It would be neat if someone could establish a vision beyond "10% growth forever through lots of rent seeking." Maybe a company has to go through their Ballmer to get to their Nadella.

Maybe they can somehow convince Jeff Dean to be CEO and just write an AI that generally maximizes profit.

stjohnswarts(10000) 6 days ago [-]

Google has never lived up to the 'do no evil' motto but under Sundar things have only gotten much worse with no signs of getting better at any point in the future.

bb88(2502) 6 days ago [-]

I'm not sure that Sundar has been a good CEO. The constant shutdown of Google services that people rely on has been troubling, and the recent management troubles with employees have been as well. They still own the search market, but would you worry about the lifespan of the Google products you're thinking of buying?

I still have a Sony Google TV. It works great as a TV, but the Android OS is no longer supported, so there are no more OTA updates. It's a shame, because if they had thought about it they might have wondered what happens when they no longer support the product.

There was a time when Google questioned the value of managers and managers had to prove their value to the engineers. Maybe it's time to question their value again.

drcode(3307) 6 days ago [-]

The fact that a layer of management can be eliminated with everyone remaining on amicable terms seems like almost uniformly positive news for google.

orky56(3457) 6 days ago [-]

How do you know everyone is on amicable terms? A press release is definitely not the place to determine the reality of the situation.

lacker(1694) 6 days ago [-]

I think this is good news. Google has sometimes felt like it was pulled in two different directions - making the core business successful, and providing spinoff cash for the crazy projects. Having the same person run both of them seems like it will make management more efficient.

It probably means that a number of Larry and Sergey 'pet projects' will become deprioritized. Good. Self-driving cars in particular seem promising; I think that technology alone has the potential to make the entire Alphabet structure worthwhile. But it also seems like it should no longer need the Alphabet structure to protect itself.

avocado4(4091) 6 days ago [-]

I think a lot of 'bets' are going to be canned after this. A lot of them were just toys for Larry & Sergey.

matthewfcarlson(10000) 6 days ago [-]

What are some Larry and Sergey pet projects? Genuinely curious.

sys_64738(3735) 6 days ago [-]

An ad company has a new CEO. Why is this news?

lucb1e(2135) 6 days ago [-]

I'm also not sure what the news is. The (in 2014) new CEO of Microsoft apparently got rid of an entire management layer, that seems like news. Without an intention like that being announced, this is celeb news for nerds and really only tells me it'll be new face same story -- at least until we know more.

mmmeff(10000) 6 days ago [-]

Okay Google.

Avamander(10000) 6 days ago [-]

Funnily (or sadly) enough, saying 'Ok boomer' triggers the hotword detection.

lifeisstillgood(1547) 6 days ago [-]

Weirdly this feels like a non-issue. Google has not felt like it has a 'personality' for some time - maybe it's a function of hitting mega-corporate size, but it also feels a bit like when Microsoft (a Computer on every Desktop) essentially achieved its goal, it then spent a decade in 'goalless and soulless exploitation' mode - something that one suspects is the next step for the world's largest personality-free platform.

If the two of them leave (have fun sipping pina coladas on the beach!) I am not sure (from the outside) what difference will be made. This may sound like great corporate succession planning - but I feel without a goal there will be little to stop business plans that boil down to 'squeeze every dollar from everyone everywhere'

(Was there a glimmer of light in 'unbiased free information to all' - is that a mission for the new decade?)

Edit: Just to emphasise - I hope they have fun spending their billions.

gerdesj(10000) 6 days ago [-]

'Weirdly this feels like a non-issue'

It is.

That page is awful with a hood that keeps flapping around up and down and text that is trying to be true to italics for quotes instead of lots of diacritics and ends up looking badly diseased. Then the letter sidesteps into a memo, which is equally odd and awkward. It's all a bit odd.

S and B (in my very opinionated ... opinion) did create a great thing in Google. I can't fault people trying to make a living and running with the ball to the point where the playing field is not just paved with gold but it nearly redefines what the concept of gold is.

I think they should have retired before 'do no evil' was ditched. That would have cemented their status as internet demi-gods. Instead I think their legacy will be

<i>weirdos who spy on you</i>

lonelappde(10000) 6 days ago [-]

'unbiased free information' was never a goal because it's impossible.

tanilama(10000) 5 days ago [-]

Sundar lacks that maverick charm that founders of legendary companies usually have.

He feels ... reserved and safe.

Maybe that is what Google needs at this moment

joshspankit(10000) 6 days ago [-]

What ever happened to digitizing all of the world's information?

That still feels like at least a 100yr mission.

hnzix(4130) 6 days ago [-]

> is that a mission for the new decade?

It would be nice to see a focused mission from El Goog. From the outside it feels like they incubate a bunch of random semi-competing products which are arbitrarily terminated or boosted. There's a lot of talent that could be marshaled.

jansho(2729) 5 days ago [-]

> Edit: Just to emphasise - I hope they have fun spending their billions.

Meh (time for my socialist side to come out) they could try and do the Bill Gates Foundation thing. Don't be cynical — it is making some much needed positive change!

Money = Power = Responsibility (with wisdom).

Exponentially true if you are a super billionaire.

jillesvangurp(10000) 5 days ago [-]

I share the sentiment: Sundar feels to me a bit like an invisible placeholder for leadership, which I would argue has obviously been lacking at Google lately.

There's nothing wrong with its stock price and revenue, and I'm sure from that angle a lot of financial people are pretty happy with the status quo of just milking that cow perpetually. However, looking at it from the technical angle, I see a company that is asleep at the wheel and showing a distinct lack of vision, leadership, and direction across the board of its product portfolio. Perhaps a bit like MS under Ballmer, ramming out increasingly less popular iterations of Windows and Office. MS turned things around under Nadella. Google perhaps hasn't sunk far enough that it needs that kind of leadership change yet, but to my eyes it seems to be going down that same path slowly.

IMHO all the money-making units worth mentioning in Google have their origin in a brief period of the early 2000s, perhaps up to the 2006-2008 time frame (i.e. the Android launch), when it was smaller, more creative, nimble, and definitely more capable of translating vision into execution. That includes things like Google Docs, Hangouts, Maps, Photos, YouTube, Gmail, Android, Chrome, Google Cloud, and of course the big money maker, ads.

A lot of other stuff launched in the years since has simply failed to get traction or got killed early. This has actually become a meme on HN and elsewhere: at the moment of an announcement, people openly wonder when Google will kill X, where X is a long list of stuff Google tried and failed to deliver or just walked away from despite internal and external enthusiasm (e.g. Google Inbox). The list of stuff they announced in the last decade or so that actually didn't get killed is worryingly short.

There actually is very little of significance that I can name that emerged out of Google in the recent decade that is worthy of being added to that list and only some stuff under the Alphabet umbrella that comes close (i.e. Waymo would be the main success story there that has yet to prove itself as a long term money maker).

In other words, I think of Sundar as a caretaker, not a leader. He's greasing the wheels of the money-printing machine that is ads, but maybe he isn't really the best for coming up with the next big thing. Maybe now is a good moment to start looking for a real leader to replace the founders, who clearly just announced their permanent retirement from the industry and any other meaningful involvement with tech (to un-sugarcoat this announcement).

TaupeRanger(10000) 6 days ago [-]

What makes you think they're retiring to sit on their asses all day and 'spend their billions'? Maybe they have new and interesting things to do.

basch(4222) 6 days ago [-]

Ironically, they could find some focus by no longer being a one-size-fits-all search engine, and instead offering different search products for different types of users, or focused on different types of information. They do to an extent with travel data, financial data, and Scholar, but their product just isn't great at crawling the web anymore.

They, like Apple and Microsoft, also need some consumer experience advocates, who take a step back and ask how the consumers entire experience is across the whole suite. All of them have products that are often less than the sum of their parts.

aazaa(4112) 6 days ago [-]

Buried in a paragraph 2/3 of the way down:

> ... Going forward, Sundar will be the CEO of both Google and Alphabet. ...

Not knowing the internals of Google, it seems as if this is the announcement that Page and Brin are stepping down. Is this correct?

If so, what an incredibly subtle way to announce a high-profile pair of resignations.

rory096(3733) 6 days ago [-]

This paragraph seems pretty straightforward:

>Today, in 2019, if the company was a person, it would be a young adult of 21 and it would be time to leave the roost. While it has been a tremendous privilege to be deeply involved in the day-to-day management of the company for so long, we believe it's time to assume the role of proud parents—offering advice and love, but not daily nagging!

anonytrary(4070) 5 days ago [-]

It's not subtle at all. Anyone who knows the current state of affairs (i.e. Sundar being CEO of Google) will have deduced this based on the myriad of titles on this topic alone.

lacker(1694) 6 days ago [-]

Yes, this is the announcement that Page and Brin are stepping down.

I wouldn't really call it 'incredibly subtle' - most similar announcements are wrapped in a bunch of corporate language as well. This is a pretty standard way to announce this sort of thing.

raldi(584) 6 days ago [-]

Nah, anyone used to reading Google press releases knows that a subject line of, 'An Update on X' always means X is being shut down, and thus, 'A letter from Larry and Sergey' could mean nothing else except they're stepping down.

seppel(10000) 6 days ago [-]

The funny thing is: if you are used to corporate speech, then immediately after you read the headline 'A letter from Larry and Sergey' you know that there is a resignation coming up.

mr_woozy(10000) 5 days ago [-]

I hate google and you should too.

dang(179) 4 days ago [-]

Please don't post unsubstantive comments to Hacker News.

ixtli(3925) 6 days ago [-]

> Nonetheless, Google's core service—providing unbiased, accurate, and free access to information—remains at the heart of the company.

How is it ok to just say clear lies like this?! Without making any value judgements at all this is blatantly false. The ranking algorithms specifically encode a bias into search results. This is actually explicitly what users want, too! To be clear this is distressing to me because very powerful people can throw around words uncritically in this way for niche political points without being challenged.

EDIT: It seems people are taking this comment in a way I didn't intend for it to be taken. Another example is that they are applying biases by sorting and displaying results on news.google.com. Personally I really like how this is done, but I think we need to be honest that it is a bias so that we can move on to a more productive conversation about what is or isn't a good bias.

pknopf(4188) 6 days ago [-]

The difference is between personal bias and relevant search results.

He meant to say 'personal bias.'

anonytrary(4070) 5 days ago [-]

I think people are missing your point. As we all know, removing biased and false information is an unsolved problem, so I find it a bit odd that they claim to be able to do this. Not only that, but now they will be held accountable in this regard. I can't see how this statement made it through PR.

savanaly(10000) 6 days ago [-]

Don't be too pedantic. By your definition, what even is unbiased? A list of all the sites on the web, A to Z? Words are what we make of them, and there's clearly some distinction between Google and, say, Baidu that is entirely appropriate to capture with the word 'unbiased'.

bpicolo(10000) 6 days ago [-]

> I know there are some polls out there saying this man has a 32% approval rating. But guys like us, we don't pay attention to the polls. We know that polls are just a collection of statistics that reflect what people are thinking in 'reality.' And reality has a well-known liberal bias

senthil_rajasek(2293) 6 days ago [-]

I cannot tell if you are being serious or you are brainwashed.

My understanding of Google's history is their algorithms started as providing answers that the world already knew but could not find easily.

Over time they had to handle multiple lawsuits and harassment by entrenched groups that were upended by the opening up of knowledge.

These adjustments / reactions made the algorithm look biased.

This disputed quote by Cardinal Richelieu comes to mind when I think of people claiming Google is a bad actor.

'If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him'

cutenewt(10000) 6 days ago [-]

John Legere is stepping down from T-Mobile.

Perhaps he might be asked to step in for Sundar if Sundar can't turn it around in the next 18 months.

sidcool(227) 5 days ago [-]

Google should buy a hardware company and a carrier company to have a big impact on the hardware business.

nvrspyx(10000) 6 days ago [-]

I see a few comments praising Sundar. As someone who doesn't follow closely, but feels that Google has deteriorated dramatically under his watch, does someone mind explaining the reasoning for said praise?

bogwog(10000) 6 days ago [-]

He's a good businessman, but that 'deterioration' that you (and me and many many others) see is because he's not driven by the same morals and beliefs that Larry and Sergey had when they were running the company. The 'old Google' everyone loved was the pre-Sundar Google.

I can't really blame Larry and Sergey for wanting to do something else, and leaving the company in the hands of someone that they know will maximize profits. But that doesn't change the fact that modern Google does a lot of scummy and downright evil stuff today that likely wouldn't have happened during the 'old Google' days.

jpm_sd(3029) 6 days ago [-]

He's Ballmer-ing steadily forward, that's all that is really needed at this point.

tootie(10000) 6 days ago [-]

They've done very well from a product perspective as well as dollars and cents. And I think he hasn't absorbed much blame for the culture problems. At least not yet.

nostrademons(1712) 6 days ago [-]

He's a Type-2 CEO [1], a Ballmer, Sculley, or Cook figure. These types of personalities are generally incapable of innovating, but they're very good at maximizing the growth, efficiency, and profit potential of existing product lines. Wall Street loves them because they lead to higher EPS; the general public generally considers them mediocre because they don't lead to awesome new toys to play with.

Personally I thought Sundar was a very good manager (much better than Larry, who honestly sucked as a CEO) but a poor innovator. As a HN commenter you're probably looking for an innovator. The average user (who just wants Google to stay up) and the average investor (who just wants to make more money) are quite happy having a good manager who can optimize operations, though.

[1] https://a16z.com/2010/12/16/ones-and-twos/

Ididntdothis(4192) 6 days ago [-]

Google has matured into a regular big corporation. By that standard they are doing well with revenues and earnings growing.

lacker(1694) 6 days ago [-]

Google's market cap has approximately doubled while Sundar has been the CEO. That's the simplest reason.

Personally I think a lot of their products have gotten better recently, like the experience of using voice-controlled Google Maps in my car, and Google Cloud is superior to AWS in many ways, but the company really offers so many different product lines that it's hard for personal experience to be a great representation.

whoisjuan(4105) 6 days ago [-]

Honestly, I'm with you on this one. Sundar seems like a good maintainer of the status quo, and that's probably wise since the business keeps growing (growing revenue, growing market cap, and growing free cash flow), but Google's evolution on cloud infrastructure/services and hardware is frankly kind of lame.

I mentioned those two because they seem to be their big bets now.

When compared to Satya who is thriving with Microsoft, Sundar looks mediocre at best. I don't mean that in a bad way. Perhaps Google doesn't need to have those tectonic changes to produce value. In that case, Sundar's reserved and low-key profile makes more sense, but don't expect a high profile, exuberant and innovation-driven leadership. I think it's clear he is not that type of leader.

cvhashim(10000) 6 days ago [-]

They've maintained a strong market performance

president(3836) 6 days ago [-]

It seems most people are probably praising him for bringing up the Google stock price, whereas people who actually care about products/culture/technology realize he hasn't actually contributed anything worthwhile. Yet another classic case of a CEO riding out momentum set forth by predecessors and maximizing value to fiduciaries while selling out the user base.

bedhead(3771) 6 days ago [-]

It is clear that Sergey and Larry (and Eric) have been asleep at the switch in shaping Google's culture, which now looks like the campus of Evergreen State College. Both Larry and Sergey have seemed checked out for years. Being the CEO of this company, especially given everything they've accomplished, doesn't seem very fun. I couldn't tell you what Sergey's been doing there or if he even works at all. This company's culture is trending toward toxic, but the fact that they invented the world's greatest money-making machine has papered over a lot of the rot.

tomrod(2257) 6 days ago [-]

I am out of the loop. What does this mean?

> It is clear that Sergey and Larry (and Eric) have been asleep at the switch in shaping Google's culture, which now looks like the campus of Evergreen State College.

throwwwawayyy(10000) 6 days ago [-]

Posting this as throwaway because, well, obvious.

The heat is on Google and its founders. It goes beyond anything that's in the press right now. The real question is if regulators and others will allow Sundar to take the heat for everything and keep Larry and Sergey from the "hassle" of having to testify about anti-competitive practices and a whole range of other issues. And then there are the "hobbies" of the early Google crew and founders which are a problem in the #metoo era...

There is way more to this than a retirement goodbye.

Rapzid(4193) 6 days ago [-]

When you say 'hobbies' are you alluding to sex tourism and Casa De Epstein type stuff?

0x8BADF00D(10000) 6 days ago [-]

Yeah it also seemed like an Irish exit to me. Especially with all the scrutiny over data collection policies and other political issues happening internally.

pier25(3433) 6 days ago [-]

What are those 'hobbies' you talk about?

dfcagency(10000) 6 days ago [-]

Upvoting you because I've heard similar stories from Burning Man regarding Google founders, and in particular Eric Schmidt.

leowoo91(4179) 6 days ago [-]

TIL that is a valid url: http://google/

messo(10000) 6 days ago [-]

Does not work here. Maybe it is your DNS that auto-corrects / redirects the URL?

Rapzid(4193) 6 days ago [-]

The whole thing reads like it was written by a PR person.

TurkishPoptart(10000) 6 days ago [-]

Well...this is a corporate announcement, so it was likely at least approved by a PR/legal blob.

xwowsersx(3609) 6 days ago [-]

Because it was lol.

el_cujo(10000) 6 days ago [-]

This is exactly the type of thing you have PR people on staff to write though?

arrrg(4111) 6 days ago [-]

It is literally the job of PR people to write things like these. I'm confused by your statement. This is effectively a press release, of course it was written by a PR person.

rvz(4080) 6 days ago [-]

Well, it isn't every day you hear of the original CEOs of FAANG companies stepping down.

First you heard Gates, then Jobs (twice), and now Page and Brin. We'll see whether Pichai can continue this now that he has been crowned King of Alphabet and Google.

smlacy(3931) 6 days ago [-]

I wonder if this is actually: 'Larry and Sergey resign over cancellation of TGIF'?

kccqzy(3070) 5 days ago [-]

Seems unlikely. Larry and Sergey previously attended all TGIFs. Then, after a series of pointed questions at TGIF, they decided to no longer attend.

ocdtrekkie(2741) 6 days ago [-]

Incredible way to paper over the likely source: https://www.cnbc.com/2019/11/06/alphabet-board-investigating...

Sergey Brin and Larry Page were terrible to women. They engaged in, promoted, and paid for sexual harassment. Shortly after the Alphabet board began investigating misconduct, Larry and Sergey are 'retiring'. And of course, they're still worth billions.

telotortium(1714) 6 days ago [-]

This news story is talking about an investigation of David Drummond. I'm not going to rule out harassment claims against Page and Brin, but do you have a better source?

cromwellian(3993) 6 days ago [-]

This article says David Drummond was involved in an extramarital affair with an employee, it doesn't say that Page or Brin are being investigated for sexual misconduct.

Where's your evidence that they're stepping down over some personal scandal? This seems like a rather bold claim.

ummonk(4170) 6 days ago [-]

I recommend changing the title to "Sundar will be the CEO of both Google and Alphabet" (which I've quoted from the article).

dang(179) 6 days ago [-]

Ok, changed.

Wonnk13(3920) 6 days ago [-]

This has to be a green light for TK to take Cloud and really run with it.

There's no way Sundar can give as much attention to Google and all the other bets simultaneously. Curious what side projects of Larry and Sergey get turned down now.

joemi(10000) 6 days ago [-]

Who or what is TK? It's proving hard to google decisively for a relevant meaning.

suyash(3616) 6 days ago [-]

However, neither TK nor Sundar is of Satya Nadella's caliber. Let's see how things turn out in a few years with this change at Google.

emrehan(4150) 6 days ago [-]

Dear Larry and Sergey,

Thank you for creating Google.

Nobody could prove that the web would have been a better place without it. You couldn't have known what was to come when you were raising hundreds of thousands of dollars as two students in 1998. Maybe your leadership has been one of the least evil among the possibilities in the evil world we live in.

However, we know what it is like to lose control of your company, and how it could inspire dystopian novels under your leadership now. There are many lessons for entrepreneurs to take from your story.

I sincerely hope that you will prioritize the greater benefit rather than Google's benefit as the years pass.

Sincerely, A Non-Googler, one of the 7.7 billion

emrehan(4150) 5 days ago [-]

Since I can't edit this post, I've written my thoughts more clearly in a Twitter thread: https://twitter.com/hantuzn/status/1202179388571340805

xtracto(10000) 6 days ago [-]

That is something that I kind of admire about Bill Gates: As Microsoft's boss he was ruthless, and even anti-competitive, all to benefit his own company.

But somehow, he is one of the 'best type of person' that could become the richest man in our world. There are so many rich people who just look to be buried with their millions or to pass them on to their families.

Hopefully Sergey and Larry will try to build the same type of legacy.

claudeganon(10000) 6 days ago [-]

Seems like "interesting" timing given that the Alphabet board just recently stated they're investigating the handling of sexual misconduct by executives:


And relatedly:

> Charlie Ayers: Sergey's the Google playboy. He was known for getting his fingers caught in the cookie jar with employees that worked for the company in the masseuse room. He got around.

> Heather Cairns: And we didn't have locks, so you can't help it if you walk in on people if there's no lock. Remember, we're a bunch of twentysomethings except for me, ancient at 35, so there's some hormones and they're raging.

> Charlie Ayers: H.R. told me that Sergey's response to it was, "Why not? They're my employees." But you don't have employees for fucking! That's not what the job is.


dgacmu(3465) 6 days ago [-]

It's hard to have the moral authority on that with Sergey still running things: https://www.bizjournals.com/sanjose/news/2017/11/30/google-e...

If your theory is correct, we should see Drummond retire soon as well. I expect it...

lonelappde(10000) 6 days ago [-]

That's years old news.

luxuryballs(4217) 6 days ago [-]

I was thinking "interesting timing" given that we are coming up on the 2020 elections and Twitter CEO Jack announced he would be living in Africa for a portion of the year... they are washing their hands of whatever shenanigans Google and Twitter will be up to during this election cycle.

airnomad(3334) 5 days ago [-]

The problem with Google is that they are losing the data game.

They own the web, but meanwhile the web has become only one medium among many, and a lot of companies are building their own 'web'.

Instagram app is essentially a browser to use IG's private 'web' of content google has no access to.

Same goes for twitter, even Snapchat.

So attention-wise, Google is losing dominance as other companies build their own protocols on top of the web.

Speaking of data, they own search intent (via Google) and web content (via Google Analytics).

However, I'd say Facebook is ingesting way more data on non-content usage. They've made it much easier for advertisers to connect into FB's audience graphs and to cross-reference and retarget based on first-party visitor profiles.

hobofan(4221) 5 days ago [-]

If you go with that kind of thinking, then I would say that they also mostly own location data. I'm seeing ads in Google Maps more and more regularly, so it seems they are gearing up to monetize that too.

Even being locked out of those silos, I wouldn't be too worried about Google, as it seems to have barely hurt their income streams.

xz0r(4169) 5 days ago [-]

> Instagram app is essentially a browser to use IG's private 'web' of content google has no access to.

Search engines are meant to index the open web. IG's public profiles are still being indexed by Google. Google is winning the data game from an ad-tech point of view, not losing it.

footpath(2205) 6 days ago [-]


Slightly more organized info in the intro bullets.

Larry Page and Sergey Brin, the CEO and President, respectively, of Alphabet, have decided to leave these roles. They will continue their involvement as co-founders, shareholders and members of Alphabet's Board of Directors.

endorphone(3019) 5 days ago [-]

In a way this seems like an admission of the failure of the 'Alphabet' thing. The idea behind that originally was that all of these other projects were going to become so significant that it wouldn't make sense to have them or their management coupled with Google.

But a half-decade later, it's still 99.9% Google, so just double-up the Google guy to lead both tiers. Same as it ever was.

nostromo(3167) 6 days ago [-]

I'm curious what their continued involvement will look like.

For a while after Bill Gates stepped down as CEO, there was this awkward tension where Steve Ballmer was CEO, but people still treated Bill like he was the one in control -- because he was.

tjmc(4194) 5 days ago [-]

So who takes the role of President now? Neither the parent article nor the one you cited makes that clear.

jmmcd(10000) 6 days ago [-]

Please please please, just keep telling Sundar and everyone, 'don't be evil'.

drran(10000) 6 days ago [-]


be evil

1000units(10000) 6 days ago [-]

In a world where 'telling' people is enough, there is no evil.

skratchpixels(10000) 6 days ago [-]


DoofusOfDeath(10000) 6 days ago [-]

I like the idea of the 'don't be evil' rule, but it always seemed overly naive / simplistic to me.

Even without factoring in self-justifying rationalizations, people can have significantly different ideas of what counts as evil.

lawrenceyan(1136) 6 days ago [-]

Well-deserved congratulations to Sundar on his promotion!

Larry and Sergey have spent the majority of their time on Other Bets for quite some time now, and it's good to see that this is finally being solidified / recognized within Alphabet's management structure.

H8crilA(10000) 6 days ago [-]

There's really no need for corpspeak here.

option(4072) 6 days ago [-]

Can someone please explain why it is well deserved? For example, when I was at MSFT it was very clear why Satya deserved to be the next CEO - for leading and growing the cloud business from irrelevance to a major pillar of MSFT. What about Sundar? Is this all because he led Chrome?

pg_is_a_butt(10000) 6 days ago [-]

I thought the entire point of Alphabet was the 'Other Bets' than Google.

So now the CEO of 'Other Bets' is also the CEO of the main bet... so what's even the point of separating the companies other than the tax dodge? oh, right.

sbuccini(3738) 6 days ago [-]

Congrats to Sundar. Well deserved and I wish him the best of luck in his new role.

andihow(10000) 6 days ago [-]


ggm(4052) 6 days ago [-]

Absolutely nothing about the structural discontent emerging among staff.

And nothing about the increasing sense of a loss of the claimed ethical stance. I stress claimed, because the lack of concern at the top over its obvious demise makes it less likely it was actually held as a core belief.

rbavocadotree(10000) 6 days ago [-]

Huh? Why would you expect any of that in this letter?

lacker(1694) 6 days ago [-]

As a former Googler who stays in touch, there doesn't seem to be structural discontent emerging in staff. There were a few news stories about people complaining, but Google employs about 100,000 people, so any news story involving less than 1000 of them being unhappy can't really count as 'structural discontent'.

ardit33(3258) 6 days ago [-]

Unless there is more behind the scenes, the letter is basically saying:

We don't want to deal with this pesky employee stuff. We don't have the time or energy for it, don't enjoy it, and would rather do something else with our time and money.

They are in a position where they either crack down on their culture, make it more corporate, and appear to go full evil, or become even more lenient and risk small 'intolerant' groups or activists taking over and creating disruptions to the business. Whatever they do at this point, they will be painted in the media either as the bad guys (if they go full evil corp) or as the 'dysfunctional' company (if they allow even more discontent and become more 'college/academic-like').

Basically, their employee situation is becoming such a PITA that they'd rather not deal with it: quit the company and do something else, more interesting, instead.

They realize that they just don't enjoy dealing with the creature/organization that they created.

Basically, it is the CEO's version of the 'it's not you, it's me' break-up line, and we all know what that line means.

mikl(3691) 6 days ago [-]

Kinda makes that whole restructuring dance they did when creating Alphabet pointless. The CEO of Google is once again in charge of all the things.

anonytrary(4070) 5 days ago [-]

I'm sure the legal implications of this so-called 'restructuring dance' remain and are certainly not rendered pointless because of these title changes.

(1184) Verizon/Yahoo Blocking Attempts to Archive Yahoo Groups – Deletion: Dec. 14

1184 points about 19 hours ago by Diagon in 10000th position

modsandmembersblog.wordpress.com | Estimated reading time – 13 minutes | comments | anchor

Well, as of sometime on December 5th, a huge number of the archivists who were scrambling to rescue archives from Yahoo Groups had their e-mail addresses apparently banned, so they can no longer rescue the archives of the groups they had set up operations to save.

So that means that, for me anyway, Verizon has lost all benefit of the doubt, and is at the very least aware of what Yahoo is doing to groups, or at worst complicit.

We are receiving comments and messages from the frustrated and angry groups archivists, and some of those are posted below. You can send in your own if you e-mail it to [email protected] . I won't put up threatening, nasty or vitriolic messages however, so you can be angry, but be angry in a controlled way that makes us better than them.

From some of our archivists, dated Wednesday, December 4th:

Just a quick (and sad) update...

The Archive Team (who is working with us to save content to upload to the Internet Archive) was again blocked by Yahoo. The block is wiping out the past month of work done by hundreds of volunteers.

This info was reported on their IRC channel.

Yahoo banned all the email addresses that the Archive Team volunteers had been using to join Yahoo Groups in order to download data. Verizon has also made it impossible for the Archive Team to continue using semi-automated scripts to join Yahoo Groups, which means each group must be re-joined one by one, an impossible task (redoing the work of the past 4 weeks over the next 10 days).

On top of that, something Yahoo did has killed the last third-party tool that users and owners have been using to access their messages, photos and files (PGOffline). Note: not everyone who paid for the PGOffline license is being impacted by the problem, but the developer does not have a workaround. Here is their post about it.

Yahoo's own data tools do not provide Group Photos and, as in my case, for two IDs I keep getting the data from another Yahoo account.

The Internet Archive/Archive Team Faces 80% Loss of Data Due To Verizon Blocking

@betamax meedee+: we've lost the access to the vast majority of the groups we joined [because Verizon blocked access to our accounts] .... the effect is that some percentage.....of the signed up groups can no longer be fetched from ...."

They are working to get a final number, but the Archive Team estimates that is an 80% loss of the Groups they and their volunteers spent the last month joining in preparation for archiving.

For our community, this is a 100% loss of the Groups we submitted to the Archive Team for archiving (30,000).

Morgan Dawn

Hello Brenda,

Thanks for reaching out. It would be really great if you could both forward it to Verizon and publish it on the blog. We had a lot of 'external' (non Archive Team) volunteers helping out, with the expectation that the groups they were joining would be saved, and it is important to communicate to them that Yahoo have basically destroyed most of our progress and work will now need to begin from scratch. I want to avoid a situation where someone comes along in six months time, asking for a group they expected to be saved because they joined it, and having to tell them we didn't manage to save it.

It would also be good to communicate to Yahoo our disappointment that they decided to block our archival efforts without opening a dialogue with us. We've always said we'd be happy to work with Yahoo to archive Groups in a way that minimizes disruption to their services. Realistically the only way we're going to get anywhere near the number of groups we had joined prior to their mass-banning of our accounts is with an extension to the deletion date, which I know you've been pushing strongly for. (The 'best' solution would be for Verizon to un-ban our accounts, but I doubt that is going to happen).

Many thanks for your fantastic work so far in keeping the spotlight and pressure on Yahoo / Verizon.

Kind regards,

Andrew The Archive Team



Today I received another response from Verizon:

Hi Brenda:

I hope to address your concerns and add some clarity on the issues you're referencing.

Regarding the 128 people that joined Yahoo Groups with the goal to archive them – are those people from Archiveteam.org? If so, their actions violated our Terms of Service. Because of this violation, we are unable to reauthorize them. Also, moderators of Groups can ban users if they violate their Groups' terms, so previously banned members will be unable to download content from that Group. If you can send the user information, we can investigate the cause of the lack of access.

The Groups Download Manager will download any content an individual posted to Yahoo Groups. However, it will not download attachments and photos uploaded to the Group by other members. For those that are having difficulty with the files delivered, this help article explains the types of files within the .zip file sent and how to find third party applications that open them. This is the only way that we can deliver data to our users.

While users will no longer be able to post or upload content to the site, the email functionality remains. If you are having issues with this feature, please reach out to [email protected] and we will work to fix the problem without delay.

I understand your usage of groups is different from the majority of our users, and we understand your frustration. However, the resources needed to maintain historical content from Yahoo Groups pages are cost-prohibitive, as the pages are largely unused.

Regarding your concern around timeline, on 10/16, we posted this help page and began sending emails to Groups users explaining the changes to come, including the 12/14 deadline for download request.

On 11/12 and 11/19, we confirmed in email to you that so long as a request to download was put in by December 14th, we will ensure your download is complete before any deletion. This is the case for all Groups users and step-by-step instructions are being sent now to users to support them through the process.

We recognize this transition may be difficult, and we'd like to provide as much assistance and clarity as we can.


(name omitted).


From there, I sent two replies — one when I was angrier, and one when I calmed down

To: (name omitted)

VERIZON/YAHOO does NOT understand our frustration.




YAHOO has mistreated us and abused us all for 6 years, and it's all coming out this weekend. VERIZON owns Yahoo now, and is the one who should clean up their mess.

If we are allowed to have our archives from Verizon/Yahoo, then we should be able to get them ourselves any way we wish. All Verizon plans to do is throw it away. The stuff you send us is messed up, broken, incomplete, and virus ridden. It is UNACCEPTABLE as a solution.

The 128 people you banned were REQUESTED by the group owners to get their stuff.

Verizon refuses to give us more time to get it. We can't do it in 7 days.

So, we will continue to press public opinion about this issue, in every arena that we can with all that we have to make this deletion stop and give us more time, and work something out with Verizon to get our stuff, which is costly to store, and go away.


Bottom line. Give us our archives and we'll leave and never come back.

Thank you sincerely,


......And this is the second e-mail I sent when I calmed down:

Good evening (name omitted)

Please relay this to Verizon/Yahoo.

This is the unfairness that Verizon has shown to Yahoo Groups Users.

First of all, the initial letter said that Verizon had researched the use of Yahoo Groups. This is a complete falsehood. You know how I know? Because if you had truly researched the use of Yahoo Groups, you'd have found the 30,000 active groups that are used often, many on a daily basis, despite being broken in many areas. But Verizon did not do that. If they had, they would have found:

A police cooperative in Washington DC that was using them as a network to communicate with their respective neighborhoods with over 17,000 members.

A phone company in the UK that assigns phone numbers using the groups and now will lose all those phone designations when it's deleted.


A Birding group in New Delhi with 2,000 members that has collected data and research on birds for TWO DECADES.

An Adoption group in France, that has been using it for years and years to communicate and share history and photos and more.

They also would have found: Numerous support groups for people who are suicidal or depressed.

Numerous medical groups for people to communicate more effectively with their doctors.

Numerous Vet groups with 24 hr care advice for sick pets.

Numerous support and help groups for the Elderly.

Numerous Historical groups for WW2 Veterans, Vietnam Veterans, and so on.

Numerous science groups that have used them for years and have all their research there.

Numerous fan fiction groups or arts groups that have shared their work for years.

No Verizon, these groups are NOT largely unused. You just didn't do your homework. You didn't find us, who could have told you they were used all the time. You didn't make an effort to understand what groups were, and how they were used. You just decided they weren't being used, and weren't important and decided, "Hey, let's just delete them!"


So not researching thoroughly, and probably listening to Yahoo telling you they weren't being used, was your first mistake.

The second mistake: when the initial plan to delete our archives was hatched, some people learned of it on October 17th. Not everyone did. That's because it was very quiet and low-key, and not everyone goes to the website all the time. Many use it when they need to search for something, or to check out a photo. The point is, it's a mail list with an archive.

Some e-mails were sent to members, and perhaps all were sent, but believe me, all did not arrive on that date. People began finding out by receiving letters days, weeks, and even a full month later. This is because Groups e-mail is unreliable. Period. It has been unreliable since Mayer broke it in 2013.

Even those learning of it on October 17th had a very small window of time to save their group: a period of 58 days, less than 2 months. That was never going to be enough time for over 30,000 groups to save their archives.

As you said, photos were not downloaded along with the rest of the group data. So anyone who wanted them had to save them by hand. And there are groups with thousands or hundreds of thousands of photos, especially art groups and photography groups. It couldn't be done in the time we were given.

This delay in us finding out, left barely weeks for some people to act. Thus the panic all over the Internet, which Verizon would see if they would just look, was immediate and furious. They came to me, because I stopped the destruction of Yahoo Groups in 2010, and have been crusading against Yahoo for their unethical treatment of us, and fighting to protect our archives for 6 years.


So, as I had kept the Crusade blog up and the Crusade groups open, they all fled back to me. And this time, it wasn't that we couldn't get them [our archives] and leave, and it wasn't that they had broken access. This time Verizon planned total destruction. This time it was for keeps. So of course, I had to start the Crusade again and take it to high gear.

So the best thing Verizon could do, since they are just going to throw us all into the trash anyway, as we aren't important to them, is let us get our archives any way we can.

The terms of service really should not apply to people who have been told, "we're gonna delete you from existence." If it's lawful for us to get them from you, in a broken, buggy and virus-ridden state, it's just as lawful for us to get them ourselves.

Yahoo made a huge mess of us. I tried to warn Verizon even before they purchased it. They did not listen. Now they have the mess on their hands, and we will not be silent. Even if Verizon really does delete us out of existence, we will never stop telling the world what they and Yahoo did. Because it's high time Yahoo Groups Users start being treated as human beings again.

Bottom line. Verizon is being unfair to a group of people using a property they purchased, they are being unethical in the manner in which they announced it, and unreasonable in the time that they gave us to accomplish it.

So we demand fair and equal treatment, and will stand up for it until we get it.



Brenda Fowler

Ooohkay-then. That's how they want to be. As you know, the Owl's gloves are off, and this really nails home how complicit Verizon is going to be. At the same time, the press has become aware of Verizon/Yahoo's choice to become another bad-PR statistic over the matter. We too have this power, and some articles about our cause are already pending this weekend.

It is time that we take this up a notch. Archiving is still going on through the various teams, but we can begin the fight on another side of this scenario. Won't you join us?

All Comments: [-] | anchor

empath75(1877) about 18 hours ago [-]

When I was at AOL I tried to get them to open-source the Q-Link server code from the 1980s. Someone actually got it on DVD for me and everything, but after the Verizon merger they fired the entire legal team that was responsible for authorizing open source releases and it just stalled.

h2odragon(4219) about 17 hours ago [-]

>open source the q link server code

what a lovely thought. Thanks for the effort, even though it didn't pan out. If you've got the DVD, torrent it out :)

Now I'm wondering if there's a Stratus emulator anywhere, and/or the OS code. Them things were nasty... individually battery-backed hard drives were just the beginning. The slot cards looked like someone had dumped yellow patch-wire spaghetti all over them.

tempestn(1477) about 17 hours ago [-]

Open sourcing code can be tricky—there's quite a bit of review that needs to go into doing it right, as well as more work if you want the release to actually be reasonably useful. Blocking this archiving effort is on a whole other level. We're talking about saving information that was already public. All they have to do to allow this to happen is... nothing. I can't comprehend why Verizon/Yahoo would go out of their way to block these efforts.

aspaceman(10000) about 17 hours ago [-]

They want to wipe their hands clean of it. They don't want a record of it.

Diagon(10000) about 17 hours ago [-]

Is this something you know from the inside, or your (probably good) guess?

zfxfr(10000) about 8 hours ago [-]

There must be something I am missing somewhere.

1) I have been a member of a group for many years (a Gann study group). Last Friday I received a notification from the owner explaining that the group was closing, so he had set up a new one somewhere else. I thought it would be nice if I made a backup. So I found a Python script on GitHub (there are dozens of scripts in various languages there that can be used to back up a Yahoo group). It took me a couple of minutes to get it working, and a while later, voila! I had it nicely packed on my hard drive. So why is it so hard to back up a group? I don't understand the problem.

2) 'A phone company in the UK that assigns phone numbers using the groups and now will lose all those phone designations when it's deleted.'

What? Well OK, why not. But? They are a phone company. There must be someone able to scrape all this data? I don't get it. There are so many ways to extract data from a Yahoo group.
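For the curious, the backup scripts floating around on GitHub mostly boil down to the same loop: authenticate, page through the group's message index, and write each message to disk. A rough sketch of that pattern in Python (purely illustrative: the URL layout and the injected `fetch` callable are hypothetical stand-ins, not Yahoo's actual endpoints):

```python
import json
from pathlib import Path

def backup_group(fetch, group, out_dir, page_size=25):
    """Page through a group's message index and save each message as JSON.

    `fetch(url)` is any callable returning decoded JSON for a URL, e.g. a
    requests.Session wrapper carrying login cookies. The URL scheme below
    is illustrative only.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    start, saved = 0, 0
    while True:
        url = f"https://groups.example.com/{group}/messages?start={start}&count={page_size}"
        msgs = fetch(url).get("messages", [])
        if not msgs:
            break  # ran off the end of the index
        for msg in msgs:
            # one file per message, so an interrupted run still saves something
            (out / f"{msg['id']}.json").write_text(json.dumps(msg))
            saved += 1
        start += len(msgs)
    return saved
```

Passing the HTTP client in as `fetch` keeps the login/cookie handling (the part Yahoo actually broke for the Archive Team by banning accounts) separate from the paging loop.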

Diagon(10000) about 8 hours ago [-]

Most people running these groups are not technical. Even if they got the word in time, the only option many of them could find was PGOffline, a paid Windows program, which by this point Yahoo has blocked. Furthermore, even if they got it, what would they do with it? For many groups, the archives are a resource to be referred to. They need to be hosted somewhere, preferably with some kind of front-end search engine. Even better if the search engine integrates with any new posts on the forum they move to.

The Archive Team has been taking requests for backups of groups for people who don't have the technical facility to run the python scripts. They then intend to make them available on the internet archive. The next project is making some kind of front end, in case group owners want to host that somewhere. Some of us, for example, will be doing that behind some kind of a forum login, so it won't be search engine indexed.

As for your point 2, that was cut/pasted from the link in the OP, where it's describing that many groups are still using the platform. More relevant to this project, is that many groups are losing their archives, and those archives contain anything from scientific data, to hobbyist & howto information, to art and literature, etc.

shrubble(10000) about 18 hours ago [-]

I wonder if a small scraper script that an existing member of the group could download and run under their existing, valid account, would work?

Like a Tcl 'starpack', a single-binary Go program, or something else?

Can you script a browser to do some crawling for you?

y4mi(10000) about 18 hours ago [-]

Like.... Selenium or puppeteer?

Diagon(10000) about 18 hours ago [-]

There's way too much data. Many group owners did not know of the shutdown (Yahoo was negligent regarding informing owners), and even if they did many group owners have little or no technical capability. That's why so many requested of the Archive Team that their groups be archived.

lucb1e(2135) about 16 hours ago [-]

That's exactly what they're doing, and then they get CAPTCHAs and accounts banned.

Diagon(10000) about 19 hours ago [-]

Extensive history is about to be lost. Despite being broken, many organizations still use it. Examples from that post:

A police cooperative in Washington DC that was using them as a network to communicate with their respective neighborhoods with over 17,000 members.

A phone company in the UK that assigns phone numbers using the groups and now will lose all those phone designations when it's deleted.

A Birding group in New Delhi with 2,000 members that has collected data and research on birds for TWO DECADES.

An Adoption group in France, that has been using it for years and years to communicate and share history and photos and more.

They also would have found: Numerous support groups for people who are suicidal or depressed.

Numerous medical groups for people to communicate more effectively with their doctors.

Numerous Vet groups with 24 hr care advice for sick pets.

Numerous support and help groups for the Elderly.

Numerous Historical groups for WW2 Veterans, Vietnam Veterans, and so on.

Numerous science groups that have used them for years and have all their research there.

Numerous fan fiction groups or arts groups that have shared their work for years.

tcd(10000) about 18 hours ago [-]

And these are the dangers of relying on a private, corporate, for-profit, law-bound organization. They're subject to the law, and of course there is a cost attached to all of this.

Exploiting a free resource, as we all do these days (reddit, youtube, facebook, hackernews itself etc) is all well and good but maintaining history is expensive (content needs moderating, you are required to abide by the GDPR and DMCA, there may be disputes about content on the platform).

I mean, Google+, MySpace, Bebo, and IMDB comments are now dead and gone; how useful was the data really? I'm sure some people might go to archives, but I would imagine 95% of the data is just 'rot' that has no value or substance.

History is lost all the time; we barely know what we've been up to for the last few thousand years. Only now can we so extensively document our world with the precision and quality afforded to us.

But in the end, time moves on and some of that history is lost. It hurts, but who's to say any archived history will be preserved anyhow? We're still relying on our storage technology being readable years/decades/centuries from now, which is not a given.

fred_is_fred(4148) about 14 hours ago [-]

Why wouldn't these groups for which Yahoo seems to be a critical service have done this work weeks ago? Either it's important and you make an effort yourself, or you do nothing, which clearly indicates that it is not important. I'm having a hard time getting worked into a lather over this one like it seems everyone else is - it's been announced for 2 months - ample time to save what you needed.

ec109685(4192) about 10 hours ago [-]

Not to be flippant, but wouldn't one of the members of these groups have a copy of the group in their email? Given gmail and whatnot store things virtually indefinitely, couldn't the contents be recovered that way?


tedunangst(4197) about 18 hours ago [-]

> A phone company in the UK that assigns phone numbers using the groups and now will lose all those phone designations when it's deleted.

Wow, somebody invented a database that's even worse than an Excel file on a network share.

(Also, how are they going to assign new numbers when archive.org takes over? Is archive.org going to give them write access?)

umeshunni(3471) about 18 hours ago [-]

I'm confused why Archive.org is attempting to archive and expose to the public what is essentially private communications?

My usage of Yahoo groups in the early 2000s was mostly to communicate with my high school / college / dorm groups and the last thing I want is for embarrassing messages from 20 years ago sent to a private group to be archived.

saagarjha(10000) about 18 hours ago [-]

Yahoo Groups can be configured to allow public access.

Diagon(10000) about 18 hours ago [-]

Many groups are public. Many have owners that requested the group be archived.

rahuldottech(1424) about 18 hours ago [-]

They are only archiving data on publicly accessible groups - many of which contain lots of discussions worth archiving.

betamaxthetape(10000) about 17 hours ago [-]

Clarification - we're not archive.org. Archive Team and Internet Archive are completely separate.

And we're only archiving things that 'any guy on the internet' can see. If someone can access the messages simply by joining a group (with no moderator approval), I'd argue it's fair game.

We're not going to be unreasonable, though. If something private slips through and we receive a takedown request from the author, we typically remove it.

dependenttypes(10000) about 16 hours ago [-]

> what is essentially private communications

If they have access to it, it is not private.

fl0under(10000) about 13 hours ago [-]

I had just recently been reading about Arweave [0], a sort of distributed file storage that claims to permanently store files/webpages using various incentives.

Seems like something like this would be a good way to archive this sort of information or build sites like Yahoo groups on top of this file storage in the first place.

[0] https://www.arweave.org/

kevingadd(3785) about 13 hours ago [-]

Storage isn't really the problem in this case, collecting the data is the problem because yahoo/verizon are actively hostile.

companyhen(4174) about 13 hours ago [-]

Arweave is doing great stuff but I think it'd still run into a similar situation as archive.org -- Check out the arweave discord dev community if you haven't though!

oieoeireoes(10000) about 13 hours ago [-]

Verizon claimed that the archivists violated the 'terms of service' [1], but I couldn't find any reference to automation, downloading, crawling, or denial of service attacks that might apply.

Does anyone have an idea of exactly what term or terms were violated by the archivists?

[1] https://www.verizonmedia.com/policies/us/en/verizonmedia/ter...

kevingadd(3785) about 13 hours ago [-]

AFAIK they hadn't started doing mass-archiving either. They were still setting up.

arianestrasse(10000) about 3 hours ago [-]

Just playing devil's advocate here. The way the archivists are downloading the data could be said to disrupt the services, which is mentioned in the terms of service:

2. d. viii: 'interfere with or disrupt the Services or servers, systems or networks connected to the Services in any way.'

I'd also like to point out that the apparent spokesperson Brenda Fowler said in her open letter to Verizon, that 'If the problem is that all our attempts to rescue our archives in the time we have left is causing an overload or strain on your servers, then stop making us HAVE to work around the clock, and GIVE US MORE TIME. ...' Probably not the wisest thing to say right now.

Also, archiving the groups with automated tools is against the Use of Services rule, which states the following:

2. e: 'Use of Services. You must follow any guidelines or policies associated with the Services. You must not misuse or interfere with the Services or try to access them using a method other than the interface and the instructions that we provide. ...'

As I mentioned in another comment, I really support the cause and am a big fan of archiving myself, but it's unfortunately quite clear that Verizon is right to call out the violations of its 'terms of service'.

lonelappde(10000) about 14 hours ago [-]

The lesson here isn't to run a hail Mary effort at the last day to save Yahoo Groups. The lesson here is to backup things you care about before they start to disappear.

Is this stuff really so important if not one of its millions of users thought it was worth the effort to save?

Yahoo's takeover/shutdown was announced years ago.

Diagon(10000) about 6 hours ago [-]

Inertia. I'm connected with a series of groups and I tried to get people to move for years. It just doesn't seem anything happens except under pressure.

Also, the scripts have really only been developed during these last 2 months, which is all the time they gave us when they announced they'd be deleting all content. Until then, I didn't see anything that would get me all content, including files/links/calendars/databases.

caymanjim(3742) about 14 hours ago [-]

I don't see this as a big loss, and if I were someone who'd posted on Yahoo Groups, I'd be happy if it all disappeared. I don't consider this kind of content something that should be durable and everlasting. It's ephemeral conversation. If anything worth saving comes of conversation, it should be converted to another form and saved.

I'm glad I'm old enough that most of what I wrote on message boards as a teenager and young adult disappeared long before archive.org and similar sites existed. Conversations shouldn't last forever.

btashton(4046) about 14 hours ago [-]

I'm part of an opensource project that has been around for a long time, the mailing lists (for good or bad) lived there for quite some time. There is a lot of design and troubleshooting history I would rather not be lost forever.

gumby(2113) about 19 hours ago [-]

What is Verizon's motivation for taking steps to prevent it?

jedberg(2257) about 18 hours ago [-]

Cost savings, plain and simple. Less bandwidth, fewer servers.

jsjohnst(3812) about 18 hours ago [-]

GDPR, CCPA, <insert other regulation>, all are possible reasons to throw their hands in the air rather than do the work / endure the possible risk.

Very possibly the timing isn't a coincidence, being CCPA is about to take effect.

raverbashing(3723) about 18 hours ago [-]

I find it curious that at the same time we discuss the 'right to be forgotten' laws there's also the opposite problem of preventing the internet from forgetting something.

userbinator(737) about 18 hours ago [-]

That's why I'm not a fan of those laws --- in addition to the fact that in practice they turn into something more like 'right to rewrite history'.

dependenttypes(10000) about 16 hours ago [-]

'right to be forgotten' laws are the result of the whole 'numbers have owners' insanity, combined with the fact that the average person will mindlessly use random services to store private data.

coldpie(2567) about 17 hours ago [-]

Yes! They're both really interesting problems and there's no right answer! You're right to be curious, it's a fascinating set of issues.

okasaki(3662) about 14 hours ago [-]

I wonder if they're actually deleting it, or just making it inaccessible to the public?

PopeRigby(10000) about 13 hours ago [-]

They're most likely deleting it. There's not much point in them hosting all that data when it costs money to do so.

Thorentis(10000) about 17 hours ago [-]

I'm genuinely curious from an ideological perspective, why archivists think all this material is worth saving?

People often compare the shutting down of sites or the banning of content (e.g. when Tumblr banned porn, or now Yahoo shutting down Groups) to the burning of the Library of Alexandria. But there is a huge difference. The LoA held knowledge collated and collected by the best thinkers of the time. The Internet is not that. The Internet is an open platform where anybody can say anything they like. Most comment sections are filled with all sorts of material ranging from the factual to the entirely fictional.

I realise it is hard to decide what is worth keeping (and therefore erring on the side of saving it all), but I'd wager that the vast majority of archived content is not useful at all. The Wayback machine is a perfect example. Lots of great stuff, but that's a drop in the bucket compared to the vast amounts of useless, or even redundant information stored.

It is a lot of resources thrown at saving, not the equivalent of the Library of Alexandria, but the public toilet block graffiti wall.

Anybody want to share what drives them to do this?

gdulli(10000) about 16 hours ago [-]

We don't think it's necessary to preserve everything that's ever spoken verbally. We don't lament that everyday conversation is ephemeral.

People are conflating internet discussion content with written content because it's stored as text. Whereas the more legitimate comparison is to verbal communication.

shortformblog(3087) about 14 hours ago [-]

One man's public toilet block graffiti wall is another's Library of Alexandria. Let the historians and journalists decide what's important and the archivists take their best crack at saving it.

I write a lot of historical content and often the most useful stuff I find—for example, old flyers or ads from the 1950s or 1960s—would have been considered trash by someone at the time.

So an archivist's job isn't to make a judgment. It's to protect the data as they see fit.

Diagon(10000) about 17 hours ago [-]

See below. My main concern is early medical/biohacking groups that shared data, like medical tests, and engaged in extensive discussion and community-driven research. Such groups go back to at least the late 1990s.

A main concern of the Archive Group (again, below) is art that was uploaded there.

I'm sure those are not the only two classes of examples. See for example the bird watching group in Delhi that has been collecting data for decades. (In the link of the OP.)

CamperBob2(10000) about 16 hours ago [-]

It is a lot of resources thrown at saving, not the equivalent of the Library of Alexandria, but the public toilet block graffiti wall.

Ask an antiquarian about the value of graffiti in the ruins of Pompeii and other archaeological sites sometime. The great historians of the day wrote about their contemporary culture, while the vandals and miscreants and lowlifes and commoners contributed to that culture. Having access to both sources gives us a much more complete picture.

You don't know what's worth saving at the time you save it.

Nition(3710) about 17 hours ago [-]

Step 1: We only need to archive the genuinely good content.

Step 2: It will take a long time to look through all this content and determine which parts deserve keeping.

Step 3: We will inevitably leave out something that someone else thinks is worth keeping anyway.

Step 4: Let's just archive everything.

ddingus(4174) about 14 hours ago [-]

One never knows what may have value.

The graffiti on the toilet wall may well speak to the start of a trend, term, movement, or other event, for example.

Think longer timelines, broader scope than you personally may feel is relevant.

En masse, those questions have answers we individually are unlikely to fathom.

pariahHN(10000) about 17 hours ago [-]

Even if we still had the Library of Alexandria, it may have shed zero light on the actual lives of citizens. Archiving content on the internet means capturing thousands of individual level perspectives and experiences. We don't know what will end up being important to historians 50 or 100 years from now. I would bet there are dozens if not hundreds of historians that would give anything for a record of their favorite time period that contains even a fraction of the amount of content today's archive efforts are storing.

It's also not horrendously expensive - we are getting better and better at storage as well data analysis techniques, so stuff that seems useless today may be useful 50 years from now and cost less to store than it does now. The key thing again being that we can't benefit from hindsight.

Even graffiti can give insight into a time period, even if that insight is that that time period had an unusually high number of graffiti artists.

patcon(10000) about 17 hours ago [-]

Great question! I'll take an amateur swing at a decent answer:

People doing important work (esp important work that is underfunded) don't have time to write/record their own histories. But that history can be instructive, to learn what worked and what didn't, and help future travellers do it better :)

And perhaps especially important: ppl engaging in these under-resourced efforts are often working in domains that capitalism is... less curious about, we'll just say. Otherwise, it would likely be able to be more highly documented, as incentive is there to preserve it.

Our ability to improve our present from better understanding our past is a supposed benefit of a digital world that accrues data -- we have records of things that in prior ages just flew by in conversation (for better or for worse). But efforts like this rob us all of that wisdom <3

And again, there is an asymmetry in who gets robbed. It is often the folks working in the commons, those doing invisible maintenance labour (nonprofits, grassroots, community), and generally just people doing work within the cracks of capitalism.

WalterBright(4126) about 16 hours ago [-]

> I'm genuinely curious from an ideological perspective, why archivists think all this material is worth saving?

It's easier to just save it all and let gawd sort it out.

You never know what some future person might find interesting. For example, my father took lots and lots of pictures, but they're all set in the living room and kitchen. No pictures of the rest of the house. I'm sure the thought of photographing other rooms simply never occurred to him as being interesting.

For another example, many people are interested in where/when/why certain words first appeared, like the origin of 'OK'. Massive archives of text that are searchable would help with this.

marapuru(10000) about 2 hours ago [-]

> The LoA held knowledge collated and collected by the best thinkers of the time.

... that had access to writing services and were wealthy enough to have their thoughts stored.

There could have been many odd voices out there that would've told us an entire different story. But these are unknown because they didn't have access.

Now we are in the era of (almost) universal access to storing our thoughts, and we still don't listen to everyone, or we mark them as uninteresting and not worthy.

frustyycomb(10000) about 16 hours ago [-]

Yahoo is going to keep the messages but just delete the art and other uploads or attachments to the messages, correct? Although apparently they will make some groups private as well, essentially closing access.

lsiebert(4000) about 11 hours ago [-]

Nope, they are deleting everything

dredmorbius(199) about 4 hours ago [-]

A suggestion I'd made during the G+ shutdown, and active interference (see: https://old.reddit.com/r/plexodus/comments/b87hpi/googles_at...), was that legal methods be employed.

An injunction filed on behalf of Yahoo Groups users to maintain access, delay deletion, and facilitate archival, specifically.

Is anyone working on this?

Diagon(10000) about 3 hours ago [-]

Not that I know of. Lawyers (hopefully not guns) and money would be needed. Suggestions?

dessant(3373) about 18 hours ago [-]

What prevents Verizon from donating the Yahoo Groups database to the Internet Archive? What does Verizon have to gain from preventing the archival of Yahoo Groups?

user5994461(2914) about 18 hours ago [-]

It's simply way too much work. Dying projects generating no revenues don't get the luxury of having tens of people assigned to work on them.

blihp(10000) about 18 hours ago [-]

Companies don't typically operate that way. All else being equal (especially when there's no $$$ in it for them) when given the choice between doing something and doing nothing, they usually choose to do nothing. It's often not malicious, but an overabundance of caution. (i.e. lawyers raising red flags about liability, 'our IP' etc... it's a real pain even from the inside getting large companies to do anything different from the status quo)

My bet would be that Verizon's network monitoring system/team sees the archive team's attempts as some sort of anomaly to be stopped. It's possible, though I wouldn't bet on it given Verizon's history re: public relations, that making noise might alter the equation and get them to allow the archive team to continue.

tcd(10000) about 18 hours ago [-]

I can imagine it's easier and safer (from a legal perspective) to just delete the data and therefore no longer be responsible for the content. Twitter wants to delete older Twitter accounts because they're required to by law under the GDPR.

I mean, the GDPR makes things kind of difficult in this regard, and I suspect even archives are liable if somebody takes issue with content they are hosting.

mindslight(10000) about 18 hours ago [-]

One of Verizon's spokespeople was literally Darth Vader. 'Ma Bell has you by the calls'.

Large corporations are not anthropomorphic entities, regardless of their disarming branding. Rather they are amoral bureaucracies, likely administered by people who have learned to ignore their empathy to get there. Verizon won't change course to accommodate the Internet Archive or general Internet community any more than a combine would pause for a field mouse.

patcon(10000) about 18 hours ago [-]

Maybe those who care (we?) could organize a campaign to get customers to commit to leaving Verizon if they let the messages be deleted without archive? That would convert it into the language they understand.

To raise the perceived threat level, many folks could help build tooling or docs to make migrating as easy and streamlined as possible, minimizing the tax on consumer time that they rely on. (E.g., help on comparable plans, a cheat sheet of call centre keywords, etc.)

Maybe something team 'Do Not Pay' could help run with...! [1]

[1]: https://boingboing.net/2019/10/28/parking-tickets-plus-plus....

betamaxthetape(10000) about 19 hours ago [-]

disclaimer: I'm a Member of Archive Team who's helping coordinate the joining of Yahoo Groups in preparation for archival.

Yahoo's banning of a large number of the accounts we were using is a huge setback for us. In total we lost access to over 55,000 Yahoo Groups; many of these will now not be archived and will be lost when Yahoo deletes everything on December 14.

Particularly disastrous was the loss of access to all of the 30,000 Fandom (fanfic / fanart / etc..) groups that were requested to be archived by members of the fandom community. We're back to square one now, and it is looking increasingly likely that we're only going to be able to re-join (and therefore archive) a small percentage of these groups before December 14.

(And now for the inevitable, shameless plug...) We could really use some help! If you've got an hour or so, we could really use people to come and complete CAPTCHAs for us. (A CAPTCHA is needed to join every group). Instructions at: https://github.com/davidferguson/yahoogroups-joiner

tootie(10000) about 17 hours ago [-]

Is there any cited reason for the groups they're blocking?

scarejunba(10000) about 18 hours ago [-]

I imagine you guys already know this but considering we're up against the timeline, I'd use the captcha solving service (easy to google yourself) and Luminati to distribute the IP addresses while swallowing my ethical qualms.

mehhh(10000) about 19 hours ago [-]

Have you considered using NordVPN for CAPTCHA bypass? They are a shady company, but their network of residential VPNs is impressive.

jstanley(982) about 16 hours ago [-]

I tried to do this but upon clicking the purple 'Join Group' button Yahoo is giving me an error saying my email address is not linked to a Yahoo account:

> Your email address is not linked to a Yahoo ID. To join this group, you need to link your email address to a Yahoo account.

When I click 'link your email address', it just takes me to a page called 'Personal info' which doesn't have any obvious way to link my email address.

So I'm not sure how to proceed.

EDIT: Solved it. I had initially only 'verified' the account with a phone number, but you have to add an email address as well. It's now working.

For anyone who, like me, signed up for this and filled in the Google form, but then couldn't find the leaderboard URL after closing the tab, it is https://df58.host.cs.st-andrews.ac.uk/yahoogroups/leaderboar...

It seems to be working through a list in reverse alphabetical order. Watching the progress being made is quite satisfying. When I started it was on groups like 'sciencefiction' and now it's moved on to 'petzluverz'.

ar-jan(10000) about 17 hours ago [-]

Just solved a bunch of captchas, but Chrome crashed a few times during the process. Due to the addon?

ar-jan(10000) about 17 hours ago [-]

btw, maybe Mechanical Turk could help with the captcha part?

Angostura(3660) about 5 hours ago [-]

Have you posted this on Reddit anywhere? Possibly /technology?

You might even get the admins to make an announcement.

john_moscow(4219) about 18 hours ago [-]

Forgive my naivety, but why would the blocking of your accounts delete the data you have already backed up? This sounds like you are doing it the wrong way, IMO.

Diagon(10000) about 18 hours ago [-]

While the above post is concerned with Fandom groups, my concern is with groups that started doing early community-driven biohacking-type research. There are medical test results and discussions of medical interventions. While that's my focus, I'm sure there's additional important material. We really need to save this data.

yots(10000) about 18 hours ago [-]

FYI: The extension offers many private groups that I can't join without approval, and that seems to disrupt the flow of the extension.

wanderer2323(10000) about 13 hours ago [-]

It went pretty well for the first 10-20 or so groups, but now I get multiples of the really annoying captchas (click until none remain) per group... Damnit Yahoo...

rkagerer(4087) about 13 hours ago [-]

Thanks for fighting the good fight!

I assumed I could help by going to a web page and solving a bunch of captchas for you, but when I read those instructions I found there's more involved (forging a Yahoo account, installing an extension) and it turned me off.

If captchas are the bottleneck, maybe some generous soul here could figure out a way to automate the rest and just give me a page where I can go solve captchas? Further reducing the friction might help get you some more uptake from the community - more monkeys like me banging at typewriters.

Sorry I wasn't more help, and best of luck with your efforts.

qxnqd(10000) about 18 hours ago [-]

Why does everything need to be archived? Why can't the stupid things I said 20 years ago in a forum just vanish someday?

(I never posted there but you get my point)

dbtx(10000) about 14 hours ago [-]

The one group I ever joined held plenty of useful/unique SysEx dumps containing custom patches for a popular 80's music synthesizer, among related things. I wonder who has already backed it up, and if I should.

edit: Oops, I'm also a member of LTspice. D'oh!

The best way to stop being ashamed of stupid things that you said forever ago is not to cast those things into the Memory Hole, but to stop saying those things, and most importantly, stop being the person who would. Then you know it's in the past, and it doesn't matter who else remembers.

I'll let you know how that goes, someday ;)

Diagon(10000) about 18 hours ago [-]

It's not stupid. There were serious groups using that platform. While I never thought it was a good idea, they nevertheless did. My personal concern is community-driven medical/biohacking research groups that go back to at least the late 1990s.

dependenttypes(10000) about 16 hours ago [-]

> Why can't the stupid things I said 20 years ago in a forum just vanish someday?

Because some might want to read them or use them in some form.

egfx(4175) about 13 hours ago [-]

In the early 2000s there existed two main ecosystems in mobile software, J2ME and BREW (not counting Symbian); the latter, BREW, was operated by Verizon. I had cofounded a QA consulting company that relied heavily on BREW's highly extensive developer portal. Then one day, without warning, the developer portal disappeared. Luckily I had the foresight to download all the documentation a week before. My cofounder, a Microsoft developer, was dumbfounded.

Diagon(10000) about 13 hours ago [-]

Yes, this was incredibly sudden, and with no support for getting out. They gave 13 days' notice of their intention to shut down new additions to message archives (extended to 20 days after some commotion). That was October 21, I believe. They have offered a broken group downloader that produces incomplete results. Desperate group owners have been using a Windows piece of software called PGDownload, but Verizon has blocked that. Now the only organized effort is being actively interfered with. Dumbfounding is indeed the word.

jedberg(2257) about 18 hours ago [-]

It's like the burning of the Library of Alexandria all over again.

We don't know exactly what was in the library when it burned. We assume it was all great works of intellectualism, but it could very well have been the fanfics of their time.

narrator(10000) about 18 hours ago [-]

I think one of the unintended consequences of privacy legislation is that it will support the burning of the Library of Alexandria over and over again.

The default corporate posture will be : Delete all the data! It's a liability and figuring out what we can keep is an enormous headache.

piroux(10000) about 18 hours ago [-]

Except that the Library of Alexandria never actually burnt! That is a very good ol' myth ;)

- https://www.firstthings.com/web-exclusives/2010/06/the-perni...

- https://www.ancientworldmagazine.com/articles/making-myth-li...

- https://history.stackexchange.com/questions/677/what-knowled...

But anyway, no one should delete human literature, be it inadvertently or by lack of effort.

Waterluvian(4118) about 18 hours ago [-]

Yahoo Answers is an invaluable trove of insight into an intellectual class of people that I think a lot of us regularly forget exist.

pmoriarty(51) about 18 hours ago [-]

There have to be some Verizon or Yahoo employees on HN who are reading this.

Can any of you shed some light on why Verizon and Yahoo aren't cooperating with the Archive Team to archive this valuable historical content?

(If you don't feel comfortable commenting with your regular HN account, maybe you could do so with a throwaway account?)

Also, is it possible for any of you to bring this issue to the attention of upper management and help them understand how important it is to archive this?

You Verizon/Yahoo employees have much more power to make a difference here than anyone of us from the outside can.

logicallee(4126) about 8 hours ago [-]

how much storage do you think in total all of the Yahoo Groups content takes?

ygthrowaway(10000) about 17 hours ago [-]

Probably not very helpful/informational but:

I work for VzM, but not historically directly on Yahoo products (product teams have been merged/consolidated etc. over the past few years, but there's still strong tendencies toward products people came from).

So I wouldn't be very clued into what's happening with Yahoo Groups internally. And I've heard nothing about this internally. At all.

As it stands, it's 2:30pm in SV, VzM is top of the HN frontpage, and not a single soul has mentioned it yet on internal Slack.

Will see if I can find out more.

john_moscow(4219) about 12 hours ago [-]

Pure speculation, but if you publish something created by another person without an explicit permission by them, it may open you up for a lawsuit. If some groups required explicit approval by a moderator in order to read the posts, I would take it as they didn't want the content to go public.

So technically, some legal troll could post some copyrighted information, wait for it to be published on Archive, and then sue Archive for copyright infringement and Verizon for assisting it. As a non-profit, Archive will likely get away with just taking it down, but a for-profit Verizon is a wholly different story.

oneepic(10000) about 11 hours ago [-]

I wonder if this would be a workable idea:

Create a service for long-term storage with an easy integration API; the idea would be that if you integrate your data with our service, and you eventually (maybe you're going out of business, or something) make a call to delete data, that data is first transferred to our service before deleting it on your end.

Integrating with us is basically like making a reservation in advance, so when you do perform the big delete like what's happening to these groups, it's offloaded to this service first.

I have no good idea about how to store/structure the data, or how it would make money. But I also have no idea if there's an easier solution to problems like this, where you force users to scramble to save all their stuff somewhere. People would also begin judging services by whether their data will be saved once it's terminated (ie whether you integrate with us or not), so I feel like that would ultimately bring in a lot of customers.
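The "reservation" idea above could be sketched as a delete hook: the host service registers an escrow callback, and any delete first transfers the record to the archive before removing it locally. A minimal illustration in Python; `ArchiveEscrow`, `GroupStore`, and the in-memory vault are all hypothetical names, and a real integration would make an HTTP call to the archive service instead:

```python
import json
from typing import Callable, Dict

class ArchiveEscrow:
    """Hypothetical escrow client: receives records before the host deletes them."""
    def __init__(self) -> None:
        self.vault: Dict[str, str] = {}

    def deposit(self, record_id: str, payload: dict) -> None:
        # Stand-in for an HTTP POST to the escrow/archive API.
        self.vault[record_id] = json.dumps(payload)

class GroupStore:
    """Toy host service whose deletes are intercepted by the escrow callback."""
    def __init__(self, on_delete: Callable[[str, dict], None]) -> None:
        self.records: Dict[str, dict] = {}
        self.on_delete = on_delete

    def put(self, record_id: str, payload: dict) -> None:
        self.records[record_id] = payload

    def delete(self, record_id: str) -> None:
        # Transfer to the archive first, then remove locally.
        payload = self.records.pop(record_id)
        self.on_delete(record_id, payload)

escrow = ArchiveEscrow()
store = GroupStore(on_delete=escrow.deposit)
store.put("group/42/msg/1", {"author": "alice", "body": "hello"})
store.delete("group/42/msg/1")
```

After the delete, the record is gone from the store but survives in the escrow vault, which is the whole point of the reservation scheme.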

BlueTemplar(10000) about 10 hours ago [-]

P2P software solved this problem a while ago?

droithomme(10000) about 4 hours ago [-]

Yahoo has sent emails to everyone allowing them to click a button and get a zip file prepared with everything they have posted or stored on every Yahoo Group they have ever participated in.

Blog author and people here believe that outsiders with no connection to these groups have an inherent right to download and republish all this information despite having no license to do so. Yahoo/Verizon seems to think differently.

betamaxthetape(10000) about 4 hours ago [-]

There have been lots of reports that this 'Get My Data' function doesn't work. One of the demands made by the blog author is for Yahoo to fix this so it actually does work.

Additionally, the 'Get My Data' only gives you access to all the files / photos that you uploaded to the group. These archives should not be considered complete archives of the group - is it really reasonable to suggest that in order to completely back up a group, every member must complete their own data request?

jijji(10000) about 16 hours ago [-]

what a PR mess for verizon

fortran77(3626) about 13 hours ago [-]

Not really. Few people today care about Yahoo Groups, and I suspect many people wish old posts they made to the Internet would just 'disappear.'

gatherhunterer(4220) about 16 hours ago [-]

The current administration put Verizon's chief counsel into the position of FCC Chairman. I would not expect Verizon to answer to anyone.

Also, it is shame that the person in direct contact with Yahoo over this is sending angry emails in all caps. The Internet Archive deserves better.

Diagon(10000) about 15 hours ago [-]

I agree on the first point. The second is perhaps understandable if you read the whole exchange. You know they initially gave us 13 days before they cut off storing any more of the group emails (that is, new emails)? With an outcry, they increased that to 20. Many thousands of people were scrambling to find a new home. We are now reaching the end of the line (the last week) before the archives themselves are gone, and they have blocked the main concerted attempt to save some of that history. So, some level of frustration is in order.

ajnin(4067) about 15 hours ago [-]

What was the original plan, exactly? Subscribe to as many groups as possible and then wait until the last moment to grab the data? That would almost certainly have resulted in massive bandwidth problems, and massive bans by Verizon in response at this point, failing the archival effort anyway.

CriticalCathed(10000) about 14 hours ago [-]

The plan was to organize it, then carefully and thoughtfully balance the load so that it didn't put undue burden on their servers. The entire plan was orchestrated this way so that it wouldn't cause problems.

teovall(10000) about 14 hours ago [-]

The archiving scripts have been under development and testing since Yahoo first made their announcement. The plan was to get as much manual labor (volunteers solving CAPTCHAs to join groups) done while waiting for the scripts to be stable, reliable, and automatable.

Dignium(10000) about 15 hours ago [-]

IDK if this is any help, but Verizon is holding their annual conference on Dec 10th (less than 2 days away as of this writing), with CEO Hans Vestberg presenting at 12:15 EST.


Maybe someone can pipe up at the conference.

Diagon(10000) about 12 hours ago [-]

Dignium - is that an online conference? Do you need to own a share to get access? I'm having trouble making it out.

userbinator(737) about 18 hours ago [-]

The 'dark side' of web scrapers has always been one step ahead with things like IP bans and CAPTCHA solvers, maybe it's time to get their assistance... as the old saying goes, 'an enemy of an enemy is a friend'.

PopeRigby(10000) about 13 hours ago [-]

Who are the dark side of web scrapers?

lazzlazzlazz(10000) about 16 hours ago [-]

This is a wake-up call to the entire world: we cannot take internet history for granted. We need affordable, decentralized means with long-term economic incentives to archive the digital world.

In a way, the digital world is far more fragile than the physical world. And the time to solve this is now.

8bitsrule(4012) about 12 hours ago [-]

Tragedy of The Cloud.

IIRC, Archive.org is still running its fundraiser today.

We need LOTS of publicly-sponsored and paid-for digital archival centers that, like libraries, are maintained for the common welfare. Or we could, you know, add that duty (and funding) to existing libraries! With -paid- archivists!

paulcole(4192) about 14 hours ago [-]

Verizon isn't a charity. Why would they offer any assistance?

danShumway(4077) about 13 hours ago [-]

They're also not monetizing the content or doing anything with it. They're just going to throw it away. Why would they go out of their way to block archival attempts?

This is the corporate equivalent of throwing your old computer into an empty ditch on the side of the road, and getting mad when someone responsible comes by to recycle it for you.

8bitsrule(4012) about 12 hours ago [-]

Oh, I dunno. Maybe in the interests of joining the human race?

johannes1234321(4088) about 13 hours ago [-]

This would fall under 'marketing' from an accountant's perspective.

Dolores12(10000) about 16 hours ago [-]

Recently Verizon has blocked all of my Yahoo accounts. I've spent some time trying to find any kind of support form to get them restored, with no luck. To get support you need to pay money now. Perhaps Archive.org's accounts fell under the same ban.

betamaxthetape(10000) about 16 hours ago [-]

Verizon has stated in support emails that they were aware of Archive Team's efforts and specifically will not be un-banning our accounts.[0] I therefore think it likely that the banning was targeted.

[0] https://modsandmembersblog.wordpress.com/2019/12/08/verizon-...

Diagon(10000) about 16 hours ago [-]

We have groups that the owners can't even access any more. They demand a yahoo email even when there's a non-Yahoo email associated with it. Y-Groups has been badly broken for some time.

paggle(10000) about 18 hours ago [-]

I am a self-interested party, but I'm personally glad, since there's a post in a Yahoo Group, findable through Google, that would absolutely ruin my reputation and life if discovered.

Diagon(10000) about 17 hours ago [-]

Individuals have always been able to delete their own posts. If you log in there, you can still do it (before the 14th).

Also, see betamaxthetape, above. If anything is archived, they will respond to takedown requests.

a3n(3313) about 10 hours ago [-]

Don't use free corporate services for shit you care about. Or think you may care about later.

Don't use any service that suffers from a single point of control.

How much anguish when Facebook inevitably either goes away or pivots entirely?

Or HN, for that matter?

slenk(10000) about 10 hours ago [-]

I don't believe that point is necessarily up for debate. At this point we are just trying to save the data that we know will be lost.

crazypython(4215) about 18 hours ago [-]

Let's collectively DDoS attack Verizon and Yahoo.

teovall(10000) about 18 hours ago [-]

Don't do this, but if you do, please wait until after the 14th.

saagarjha(10000) about 18 hours ago [-]

That would be illegal and wouldn't help regardless. They'd just shut it down earlier...

ComodoHacker(4195) about 4 hours ago [-]

Is this the future of Facebook, once personal data use becomes heavily regulated, the data becomes harder to monetize, and the next big thing rises on the horizon?

Diagon(10000) about 3 hours ago [-]

Probably. Now that the groups I am concerned with are migrating to our own hosted BB, we are also planning to migrate the associated FB groups away. For that, we expect to lose data in the process.

Historical Discussions: The Great Cannon has been deployed again (December 06, 2019: 1017 points)

(1028) The Great Cannon has been deployed again

1028 points 3 days ago by robbya in 10000th position

cybersecurity.att.com | Estimated reading time – 10 minutes | comments | anchor


The Great Cannon is a distributed denial of service tool ("DDoS") that operates by injecting malicious Javascript into pages served from behind the Great Firewall. These scripts, potentially served to millions of users across the internet, hijack the users' connections to make multiple requests against the targeted site. These requests consume all the resources of the targeted site, making it unavailable:

Figure 1: Simplified diagram of how the Great Cannon operates

The Great Cannon was the subject of intense research after it was used to disrupt access to the website Github.com in 2015. Little has been seen of the Great Cannon since 2015. However, we've recently observed new attacks, which are detailed below.

Most recent attacks against LIHKG

The Great Cannon is currently attempting to take the website LIHKG offline. LIHKG has been used to organize protests in Hong Kong. Using a simple script that uses data from UrlScan.io, we identified new attacks likely starting Monday November 25th, 2019.

Websites are indirectly serving a malicious javascript file from either:

  • http://push.zhanzhang.baidu.com/push.js; or
  • http://js.passport.qihucdn.com/11.0.1.js

Normally these URLs serve standard analytics tracking scripts. However, for a certain percentage of requests, the Great Cannon swaps these on the fly with malicious code:

Figure 2: Malicious code served from the Great Cannon

The code attempts to repeatedly request the following resources, in order to overwhelm websites and prevent them from being accessible:

  • http://lihkg.com/
  • https://i.loli.net/2019/09/29/hXHglbYpykUGIJu.gif?t=
  • https://na.cx/i/XibbJAS.gif?t=
  • https://na.cx/i/UHr3Dtk.gif?t=
  • https://na.cx/i/9hjf7rg.gif?t=
  • https://na.cx/i/qKE4P2C.gif?t=
  • https://na.cx/i/0Dp4P29.gif?t=
  • https://na.cx/i/mUkDptW.gif?t=
  • https://na.cx/i/ekL74Sn.gif?t=
  • https://i.ibb.co/ZBDcP9K/LcSzXUb.gif?t=
  • https://66.media.tumblr.com/e06eda7617fb1b98cbaca0edf9a427a8/tumblr_oqrv3wHXoz1sehac7o1_540.gif?t=
  • https://na.cx/i/6hxp6x9.gif?t=
  • https://live.staticflickr.com/65535/48978420208_76b67bec15_o.gif?t=
  • https://i.lihkg.com/540/https://img.eservice-hk.net/upload/2018/08/09/181951_60e1e9bedea42535801bc785b6f48e7a.gif?t=
  • https://na.cx/i/E3sYryo.gif?t=
  • https://na.cx/i/ZbShS2F.gif?t=
  • https://na.cx/i/LBppBac.gif?t=
  • http://i.imgur.com/5qrZMPn.gif?t=
  • https://na.cx/i/J3q35jw.gif?t=
  • https://na.cx/i/QR7JjSJ.gif?t=
  • https://na.cx/i/haUzqxN.gif?t=
  • https://na.cx/i/3hS5xcW.gif?t=
  • https://na.cx/i/z340DGp.gif?t=
  • https://luna.komica.org/23/src/1573785127351.gif?t=
  • https://image.ibb.co/m10EAH/Atsps_Smd_Pc.gif?t=
  • https://img.eservice-hk.net/upload/2018/06/02/213756_d33e27ec27b054afcc911be1411b5e5a.gif?t=
  • https://media.giphy.com/media/9LZTc9dQjAAL5jmuCK/giphy.gif?t=
  • https://img.eservice-hk.net/upload/2018/06/13/171314_55de6aac9af0e3c086b83bf433493004.gif?t=
  • https://i.lih.kg/540/https://i.lihkg.com/540/

These may seem like an odd selection of websites and memes to target; however, these meme images appear on the LIHKG forums, so the traffic is likely intended to blend in with normal traffic. The URLs are appended to the LIHKG image proxy URL (e.g. https://na.cx/i/6hxp6x9.gif becomes https://i.lih.kg/540/https://na.cx/i/6hxp6x9.gif?t=6009966493), which causes LIHKG to perform the bandwidth- and computationally expensive task of taking a remote image, changing its size, then serving it to the user.
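The URL construction described above can be sketched as follows (the function name is ours, and this is an illustration of the documented pattern, not the actual attack code):

```javascript
// Each meme URL is wrapped in LIHKG's image proxy and given a cache-busting
// timestamp, so every request forces a fresh, expensive fetch-and-resize on
// the server rather than hitting a cache.
function proxiedUrl(memeUrl, timestamp) {
  return 'https://i.lih.kg/540/' + memeUrl + '?t=' + timestamp;
}

// Reproduces the article's own example:
console.log(proxiedUrl('https://na.cx/i/6hxp6x9.gif', 6009966493));
// https://i.lih.kg/540/https://na.cx/i/6hxp6x9.gif?t=6009966493
```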


It is unlikely these sites will be seriously impacted. Partly due to LIHKG sitting behind an anti-DDoS service, and partly due to some bugs in the malicious Javascript code that we won't discuss here.

Still, it is disturbing to see an attack tool with the potential power of the Great Cannon used more regularly, and again causing collateral damage to US based services.


These attacks would not be successful if the following resources were served over HTTPS instead of HTTP:

  • http://push.zhanzhang.baidu.com/push.js; or
  • http://js.passport.qihucdn.com/11.0.1.js

You may want to consider blocking these URLs when not sent over HTTPS.

Timeline of historical Great Cannon incidents

Below we have described previous Great Cannon attacks, including previous attacks against LIHKG in September 2019.

2015: GreatFire and GitHub

During the 2015 attacks, DDoS scripts were sent in response to requests sent to a number of domains, for both Javascript and HTML pages served over HTTP from behind the Great Firewall.

A number of distinct stages and targets were identified:

  • March 3 to March 6, 2015: Initial, limited test firing of the Great Cannon starts.
  • March 10: Real attacks start against a Chinese-language news site (Sinasjs.cn).
  • March 13: New attacks against an organization that monitors censorship (GreatFire.org).

Figure 3: Snippet of the code used in early Great Cannon attacks. Later scripts were improved to not require external Javascript libraries.

  • March 25: Attacks against GitHub.com start, targeting content hosted from the site GreatFire.org and a Chinese edition of the New York Times. This resulted in a global outage of the GitHub service.

Figure 4: The URLs targeted in the attack against Github.com.

  • March 26: Attacks began using code hidden with the Javascript obfuscator "packer":

Figure 5: Snippet of the obfuscated code. Current attacks continue to use the same obfuscation.

Research by CitizenLab identified multiple likely points where the malicious code is injected. The Great Cannon operated probabilistically, injecting return packets to a certain percentage of requests for Javascript from certain IP addresses. As noted by commentators at the time, the same functionality could also be used to insert exploitation code to enable "Man-on-the-side" attacks to compromise key targets.
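The probabilistic behavior CitizenLab describes can be sketched in pseudologic (a hypothetical illustration; the real injector operates at the packet level on an on-path device, and all names here are invented):

```javascript
// Hypothetical sketch of probabilistic injection: for a small fraction p of
// requests matching the targeted script paths, return attack code instead of
// the real response. Most requests pass through untouched, which makes the
// injector hard to observe from any single vantage point.
function interceptResponse(requestPath, realBody, p) {
  const isTarget =
    requestPath.endsWith('push.js') || requestPath.endsWith('11.0.1.js');
  if (isTarget && Math.random() < p) {
    return '/* injected DDoS payload */'; // stand-in for the malicious script
  }
  return realBody;
}
```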

2017 and onward: attacks against Mingjingnews

In August 2017, Great Cannon attacks against a Chinese-language news website (Mingjingnews.com) were identified by a user on Stack Overflow. The code in the 2017 attack was significantly rewritten relative to 2015, and it remains largely unchanged in the attacks seen in 2019.

Figure 6: An excerpt of the code to target Mingjingnews.com in 2017.

We have continued to see attacks against Mingjingnews in the last year.

2019: Attacks against Hong Kong democracy movement

On August 31, 2019, the Great Cannon initiated an attack against a website (lihkg.com) used by members of the Hong Kong democracy movement to plan protests.

The Javascript code is very similar to the packer code used in the attacks against Mingjingnews observed in 2017 and onward, and the code was served from at least two locations:

  • http://push.zhanzhang.baidu.com/push.js
  • http://js.passport.qihucdn.com/11.0.1.js

Initial versions targeted a single page on lihkg.com.

Figure 7: The Javascript code originally targeting lihkg.com.

Later versions targeted multiple pages and attempted (unsuccessfully) to bypass DDoS mitigations that the website owners had implemented.

Figure 8: The Javascript code later targeting lihkg.com.


We detect the Great Cannon serving malicious Javascript with the following Suricata rules from AT&T Alien Labs and Emerging Threats Open.

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:'AV INFO JS File associated with Great Cannon DDoS'; flow:to_server,established; content:'GET'; http_method; content:'push.js'; http_uri; content:'push.zhanzhang.baidu.com'; http_host; flowbits:set,AVCannonDDOS; flowbits:noalert; classtype:misc-activity; sid:4001470; rev:1;)
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:'AV INFO JS File associated with Great Cannon DDoS'; flow:to_server,established; content:'GET'; http_method; content:'11.0.1.js'; http_uri; content:'js.passport.qihucdn.com'; http_host; flowbits:set,AVCannonDDOS; flowbits:noalert; classtype:misc-activity; sid:4001471; rev:1;)
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:'AV INFO Potential DDoS attempt related to Great Cannon Attacks'; flow:established,to_client; content:'200'; http_stat_code; file_data; content:'isImgComplete'; flowbits:isset,AVCannonDDOS; reference:url,otx.alienvault.com/pulse/5d6d4da02ee2b6fbff703067; classtype:policy-violation; sid:4001473; rev:1;)
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:'AV INFO JS File associated with Great Cannon DDoS'; flow:to_server,established; content:'GET'; http_method; content:'hm.js'; http_uri; content:'hm.baidu.com'; http_host; flowbits:set,AVCannonDDOS; flowbits:noalert; classtype:misc-activity; sid:4001472; rev:1;)
ET WEB_CLIENT Great Cannon DDoS JS M1 sid:2027961
ET WEB_CLIENT Great Cannon DDoS JS M2 sid:2027962
ET WEB_CLIENT Great Cannon DDoS JS M3 sid:2027963
ET WEB_CLIENT Great Cannon DDoS JS M4 sid:2027964

Additional indicators and code samples are available in the Open Threat Exchange pulse.


All Comments: [-] | anchor

fortytw2(3883) 3 days ago [-]

I didn't see this anywhere in the article (maybe I missed it), but because this utilizes the Great Firewall, it's undoubtedly done by the Chinese government, right?

nradov(886) 3 days ago [-]

That's the implication but as with most cyber attacks it's impossible to really prove the source.

raverbashing(3723) 3 days ago [-]

So, what happens if the endpoints start returning data that triggers the GF?

romaaeterna(10000) 3 days ago [-]

No. 'Behind the Great Firewall' is another way of saying 'served from China'. Perhaps -- or even most likely -- it is the government. But this is hardly a smoking gun. There are plenty of people on the mainland that hate what's going on in HK, and who are not the government.

zhte415(2605) 3 days ago [-]

The first paragraph of the article mentions

> operates by injecting malicious Javascript into pages served from behind the Great Firewall. These scripts, potentially served to millions of users across the internet, hijack the users' connections to make multiple requests against the targeted site. These requests consume all the resources of the targeted site, making it unavailable:

pkilgore(10000) 3 days ago [-]

So if the cannon is created using the great firewall, how does the Chinese government establish any sort of plausible argument that this isn't state-sponsored activity?

Do they just not care?

Some day soon a war will not be started with an assassin's bullet but with a tool like this. I wonder when we will start looking at them the same way?

MR4D(4064) 3 days ago [-]

War seems to progress as follows:

0 - Peace
1 - Trade War
2 - Financial War
3 - Electronic War
4 - Shooting War

Note that 1 & 2 are different types of Economic war, and could be grouped together. The steps occur in order, but steps can be skipped.

From a US-centric point of view, North Korea and Iran seem to be at #3. China & Russia are at a limited version of #2.

Chinese/HK seem to be at #3 with each other. Given how invisible Electronic War can be, it's possible that they are deep in #3. It's also possible that #4 might be initially fought with HK Police forces as a proxy. Think of that as '4a'.

kradroy(10000) 3 days ago [-]

Bullets have been obsolete for decades. Wars are currently fought by selling shitty financial instruments en masse to your opponents while you sit and watch them implode from afar.

NedIsakoff(10000) 3 days ago [-]

The question is, you know I'm using it. Besides some words, what the heck are you going to do?

eznoonze(4139) 3 days ago [-]

They don't care. It is of course state sponsored. The denial is just their way to fool their own people. The Chinese Communist Party rule by lies and violence. Those are the 2 keywords to understand CCP.

emmelaich(4028) 3 days ago [-]

Pretty sure they don't care.

They're also directing lasers at helicopter pilots, which is much closer to an actual war than mere bits.


kitteh(10000) 3 days ago [-]

What's wild is how at times the GFW will be abused to profit the operators of the GFW itself. Redirecting people to sites owned by friends to drive traffic/sales, etc. Due to the nature of the GFW, there isn't a lot of auditing or transparency there. Only the Chinese carriers can generally engage them and it usually involves a visit to a specific building in Beijing (no foreigners allowed).

fred_is_fred(4148) 3 days ago [-]

Why should they care, what's anyone going to do about it?

LandR(10000) 3 days ago [-]

China is essentially carrying out humanitarian atrocities not entirely dissimilar to the Holocaust, and no one cares; or, if they 'care', they don't care enough to do anything about it.

So no, no one is going to do anything about them DDoSing some sites.

blackearl(10000) 3 days ago [-]

According to the article, the attacks are currently ineffective for a number of reasons, one being that their JS code is bugged. Imagine if Gavrilo Princip's gun had been prone to jamming consistently.

Spooky23(3656) 3 days ago [-]

They don't need to hide anything. This type of activity is playing off of the success of the North Koreans and Russians in neutralizing US power in the face of a completely inept and corrupt government.

The audience is other Asian and African states. The message is 'we can act with impunity'. The US will probably do some tit-for-tat exchange, but the US scope to do anything is limited due to the potential for impact on US businesses.

upofadown(4201) 3 days ago [-]

Browsers really have to be a lot more skeptical about the code they run. Running code should not be able to randomly attack any IP address on the internet. Code from non-TLS pages should not be able to run at all. Perhaps that should also apply to code loaded from 3rd party sites.

Connecting to a web page should not be consent to allow the operators of that web page to make my computer/phone do whatever they want on the net. It certainly should not be consent to delegate that power to others, either via a embedded link or a MITM attack.

greggman2(10000) 3 days ago [-]

Unfortunately there's a giant category of devices that can't serve TLS. Like pretty much every consumer router in existence that you connect to through a webpage. Someone needs to come up with a solution for that. Ideally one that works with free and open source projects and not just well funded companies.

nitwit005(10000) 3 days ago [-]

You don't need to inject scripts to make this sort of thing work. Just add img or style tags with the source set to the target you want to attack. The browser will happily go try to fetch the files from the server, adding to the request load.

You can see unintentional examples of this happening. Small sites get taken down occasionally when larger sites directly link to images or videos hosted there.
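A minimal illustration of this point (target URL and function name are hypothetical; do not point something like this at a site you don't own):

```javascript
// No script execution is needed to generate load: an <img> tag whose src
// points at the victim makes every visitor's browser issue a GET. A unique
// query string defeats caching, so each generated tag is a fresh request.
function imgTagFor(targetUrl, nonce) {
  return '<img src="' + targetUrl + '?t=' + nonce + '">';
}

// A page embedding many such tags turns each of its visitors into a
// request source against targetUrl:
console.log(imgTagFor('http://example.com/', Date.now()));
```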

zzo38computer(10000) 3 days ago [-]

> Browsers really have to be a lot more skeptical about the code they run.

I absolutely agree with you, and wrote a document on how to make improvements.

> Code from non-TLS pages should not be able to run at all.

Whether or not it is TLS is irrelevant. Either way the user may wish to run their own code, and either way the server operator can change things, whether or not that is what the user intends. (TLS does prevent spies from adding code, but not all unwanted code is from spies.)

> Instead of locking the web down, let's give users the freedom to put on or remove as many locks as they want to live with.

I agree. Furthermore, allow the user to override any behaviour they want to do, too.

Allow the user to examine and copy the script (possibly with modifications); if the script changes (whether due to MITM or due to the author altering it or due to some other company purchasing them), it no longer runs unless the user approves the new one, too. Extensions that only allow free software to run don't help either; just because it is free software does not necessarily mean it is a program the user wants their computer to execute. Or, maybe the user wants to execute a modified version instead!

saagarjha(10000) 3 days ago [-]

> Running code should not be able to randomly attack any IP address on the internet.

How would you prevent this? What constitutes an 'attack', and how would you make sure you're not interfering with non-malicious use cases?

TheRealPomax(3909) 3 days ago [-]

This sounds like a knee-jerk reaction that doesn't take into consideration the ramifications of the suggested policy. It won't stop DDoS attacks, because those exist _because the internet exists_, and unless you dismantle the very concept of interconnected 'everyone can reach everyone' networking, all you're doing is locking down access to more and more people until only technical experts, or the people with enough money to hire those experts, get to use it.

Advocate the other direction: more freedom, including the freedom to say 'thank you, browser, for being locked down by default, but I trust this website and I am okay with everything it wants to do'.

Instead of locking the web down, let's give users the freedom to put on or remove as many locks as they want to live with. And letting them make mistakes with that, too: you don't make things better by taking away important life lessons, either.

ngcc_hk(4158) 3 days ago [-]

Could there be a list that comes up so that users who want control can see what pages the link they have selected will implicitly load? Just on the side, perhaps. They could explicitly block, or AI could learn, etc. It would have to be a per-item feature, as deny-all or enable-all is too rough to be useful.

For China, we'd need some way to handle that whole commercial-military-party single entity.

paulddraper(4026) 3 days ago [-]

> Connecting to a web page should not be consent to allow the operators of that web page to make my computer/phone do whatever they want on the net.

But that is literally what web users want.

Everything you named is a fine opinion, but runs contrary to the wishes of the vast majority of millions and millions and millions and millions of web users.

EDIT: That said, browsers have features for users such as yourself to disable JavaScript, and there are third party extensions for finer-grained control. Again, adding these limitations is unpopular among web users.

jrockway(3433) 2 days ago [-]

TLS should be required, but it seems likely to me that the Chinese government can issue TLS certificates for MITM purposes that their browsers will trust.

As for the DoS aspect, maybe it's time to do a CORS preflight on ALL cross-origin requests, including images. (Webfonts, for whatever reason, already require a CORS preflight. Probably because Adobe is on the W3C and they sell a service where certain origins can legally use certain fonts from their servers. I hate it when user security features get turned into subsidies for large corporations, but here we are.)

Of course, if you have broken TLS I guess you can just forge the CORS response.

Edit to add: I have read more comments and better understand the attack now. China is modifying the Javascript on Chinese websites that are being viewed from outside China. Making TLS mandatory would be a big help here. China could say 'all Chinese companies must buy certificates from the Great Chinese CA' and they could still do the MITM. But with evidence of the CA issuing fake certificates to DoS websites, browsers would probably stop trusting that CA entirely. I imagine China would like to avoid that, so I feel like this would have stopped the attack.

collsni(10000) 3 days ago [-]

Tls sadly won't make a difference.

kerng(2420) 3 days ago [-]

It's called a hyperlink, and it doesn't require any code or JavaScript to run. Maybe excessive requests to the same IP could be throttled by the user agent.

An outbound browser firewall could help also.
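The throttle suggested above could look roughly like this (a hypothetical sketch; no browser exposes such a hook today, and the class, method, and parameter names are all invented):

```javascript
// A user agent (or extension) tracks sub-resource requests per origin and
// refuses to issue more than a cap within a sliding time window, so a page
// cannot silently turn the browser into a flood source against one host.
class OriginThrottle {
  constructor(maxPerWindow, windowMs) {
    this.maxPerWindow = maxPerWindow;
    this.windowMs = windowMs;
    this.log = new Map(); // origin -> timestamps of recent requests
  }

  // Returns true if a request to `origin` at time `now` (ms) may proceed.
  allow(origin, now) {
    // Keep only timestamps still inside the sliding window.
    const times = (this.log.get(origin) || []).filter(
      t => now - t < this.windowMs
    );
    if (times.length >= this.maxPerWindow) {
      this.log.set(origin, times);
      return false; // cap reached: drop instead of flooding the origin
    }
    times.push(now);
    this.log.set(origin, times);
    return true;
  }
}
```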

Ayesh(3856) 2 days ago [-]

Sites embedding said JS 'analytics' files could have implemented HSTS and CSP with SRI, and this attack wouldn't exist.
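Concretely, those mitigations might look like this, assuming the embed is switched to HTTPS (which Subresource Integrity requires); the hash below is a placeholder, not the real digest of either analytics script:

```html
<!-- Pin the third-party script's bytes with Subresource Integrity; a
     swapped-in payload whose hash doesn't match is refused by the browser. -->
<script src="https://push.zhanzhang.baidu.com/push.js"
        integrity="sha384-PLACEHOLDERdigestOfTheRealFileGoesHere"
        crossorigin="anonymous"></script>

<!-- HSTS response header: after the first HTTPS visit, the browser refuses
     plain-HTTP connections, which is the channel the injector needs. -->
<!-- Strict-Transport-Security: max-age=31536000; includeSubDomains -->
```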

LifeLiverTransp(10000) 2 days ago [-]

Users should be able to restrict communication and the duration of computation.

kortilla(10000) 3 days ago [-]

You do know you're suggesting that sites not be able to load assets from other sites right?

ehsankia(10000) 3 days ago [-]

Each individual user isn't doing that much, just loading an asset from another site, which is fairly inconspicuous. It's when billions of users start doing it that it becomes a problem (the first D in DDoS), but any individual person isn't doing anything out of the ordinary.

abathur(10000) 3 days ago [-]

I've wondered about this, in the years since.

Does anyone else have a sense of what (if any) pragmatic technical steps could effectively deter or neuter this tactic?

If the network can't demonstrate the ability to at least pump the brakes on this, it's hard to imagine other states or even the owners of large safe-monopoly ISPs won't get a little jealous of the tool.

hombre_fatal(10000) 3 days ago [-]

Centralize behind Cloudflare like everyone else.

revicon(3098) 3 days ago [-]

Block all traffic from China?

FDSGSG(4168) 3 days ago [-]

If baidu.com is distributing the script, why is baidu.com not being flagged as malware by the various mechanisms used to block this kind of nastiness?

Are the vendors just cowards?

CrazyStat(10000) 3 days ago [-]

baidu.com is not distributing the script. A proxy is taking advantage of unsecure connections (http) to serve the malicious script instead of baidu's script.

cryptozeus(3590) 3 days ago [-]

The mainland China government probably has a part to play in this. As someone said above, they have access to HTTPS root certificates anyway, so HTTPS is also not safe.

gok(671) 3 days ago [-]

What exactly is the rest of the world getting from allowing China access to the Internet?

saagarjha(10000) 3 days ago [-]

The ability to communicate with hundreds of millions of people in China who have nothing to do with this?

dehrmann(10000) 3 days ago [-]

AliExpress and TikTok.

erikpukinskis(3077) 3 days ago [-]

A foot in the door

pysxul(10000) 3 days ago [-]

I am still amazed at what a genius idea this is for DDoSing at large scale.

FDSGSG(4168) 3 days ago [-]

The RPS isn't that great compared to some IoT botnets and this also gives the attacker rather limited control over the requests. It's a cool idea but I'm not really convinced that it's actually worth the trouble.

China has better tools, like XORDDoS.

SamuelAdams(3875) 3 days ago [-]

> These attacks would not be successful if the following resources were served over HTTPS instead of HTTP:

Can someone explain how using HTTPS would mitigate this attack?

cryptozeus(3590) 3 days ago [-]

HTTPS is not hackable 'yet', so you can't intercept the traffic in the middle. They intercepted HTTP traffic and swapped in the malicious JS file.

theptip(4093) 3 days ago [-]

HTTPS makes a MiTM attack much harder, because you need to have a valid cert for the host you are spoofing.

Steltek(10000) 3 days ago [-]

What DDoS protection are they using? AT&T didn't say other than it was present.

saagarjha(10000) 3 days ago [-]

I'd guess Cloudflare:

  $ nslookup -type=soa lihkg.com
  Non-authoritative answer:
   origin = kevin.ns.cloudflare.com
   mail addr = dns.cloudflare.com
   serial = 2032679273
   refresh = 10000
   retry = 2400
   expire = 604800
   minimum = 3600
  Authoritative answers can be found from:
yumraj(3568) 3 days ago [-]

Can/shouldn't the rest of the world create a Greater firewall to block the traffic from China?

Let China enjoy its solitude and we'll enjoy our openness.

rahuldottech(1424) 3 days ago [-]

Yeah, except we will effectively be cutting off _all_ outside information from the Chinese citizens, who already have to face incredible amounts of censorship.

Cut them off completely, and we will never find out about all the human rights violations taking place in their country, and their government will be able to brainwash its citizens even more easily.

ignoramous(3609) 3 days ago [-]

> It is unlikely these sites will be seriously impacted. Partly due to LIHKG sitting behind an anti-DDoS service, and partly due to some bugs in the malicious Javascript code that we won't discuss here.

If I get the attack scenario right, valid user IPs from behind the great firewall are driving traffic to the webservers, and so what are some examples of anti-DDoS mitigations that are effective in filtering out the adversarial traffic?

saagarjha(10000) 3 days ago [-]

Probably whatever Cloudflare uses, like JavaScript challenges.

thepete2(3724) 3 days ago [-]

It's bad that there are enough plain http connections for this to be possible.

mminer237(10000) 3 days ago [-]

Although Baidu does still default to HTTP, the Chinese government has the root certificates for every Chinese certificate authority. It can MITM traffic for anybody in China, even over HTTPS, so that wouldn't solve the problem.

crazygringo(3841) 3 days ago [-]

I'm curious: is it technically and politically possible for the operators of all internet cables receiving traffic from China to filter out malicious scripts?

AT&T's writeup says the injection is only possible because it's HTTP (not HTTPS), and that there are two specific JavaScript files which sometimes serve up the malicious code.

So in case of known malware like this being served from within a geographic region... is there any way to filter this out at scale? Or is that computationally infeasible at scale, so it would have to be built into the browser or something?

The article also doesn't make clear -- is this DDoS coming exclusively from outside of China? Or is it injecting the same malicious code inside of China as well, and they're just not bothering to distinguish between requests coming from inside or outside the country? (In which case, the DDoS will continue regardless, just not with the rest of the world's help.)

spydum(4071) 3 days ago [-]

I'm not a huge fan of anybody (China or otherwise) performing content inspection or filtering on my behalf transparently. That's just another instance of the Great Firewall with other people at the reins. If you choose to do that at your edge network, kudos to you. Just don't force it upon me.

RockIslandLine(10000) 3 days ago [-]

Technically possible maybe, politically possible no.

Any ISP could force unencrypted traffic through a deep packet inspection system that looked for this kind of malicious behavior. That would be widely seen as a betrayal of the 'big dumb pipe' expectation.

The computation itself is not infeasible at scale. But any ISP attempting this would see swift and brutal political pushback and almost certainly lose customers over it.

gruez(3724) 3 days ago [-]

>I'm curious: is it technically and politically possible for the operators of all internet cables receiving traffic from China to filter out malicious scripts?

Considering that the halting problem is undecidable, it's impossible to filter out the malicious scripts with complete certainty. The best you can do is use blacklists/heuristics which lead to an arms race.

>So in case of known malware like this being served from within a geographic region... is there any way to filter this out at scale? Or is that computationally infeasible at scale, so it would have to be built into the browser or something?

Foreign ISPs can block port 80 or HTTP requests from coming into China. Sure, it's going to break a lot of sites, but it's relatively simple for any site to get unblocked - all they need to do is set up Let's Encrypt.
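The blacklist/heuristic filtering mentioned in this thread can be sketched in a few lines; the signature strings below are hypothetical stand-ins, not the actual injected files:

```python
# Minimal sketch of signature-based filtering of unencrypted traffic.
# Signatures are hypothetical examples, not the real injected scripts;
# an attacker can trivially evade this by renaming or obfuscating the
# payload, which is why this approach degenerates into an arms race.
BLACKLIST = [
    b"malicious-analytics.js",   # hypothetical filename
    b"ddos-payload.example.cn",  # hypothetical host
]

def looks_malicious(http_payload: bytes) -> bool:
    """True if the payload contains any known-bad signature."""
    return any(sig in http_payload for sig in BLACKLIST)
```

Deep packet inspection at ISP scale would hang a matcher like this off every unencrypted flow; the check itself is cheap, it's the evasion that makes it hopeless.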

brenden2(10000) 3 days ago [-]

This is a good counterexample for whenever you find yourself in an argument with anti-adblocker folks.

MisterTea(10000) 3 days ago [-]

This is a tiring example of why the web and all its technologies thoroughly suck. It's a boiling toilet fueled by greed.

Someone1234(4213) 3 days ago [-]

But these folks still have no answer for how the free websites they consume daily (e.g. news) are to be funded: they don't pay, and they don't want to see ads either. Yet they still expect these websites to exist.

I use Firefox's built-in Enhanced Tracking Prevention, which some sites call 'ad blocking', but in reality it is super easy to serve ads that don't get blocked by it - just make them non-creepy.

jefftk(4162) 3 days ago [-]

This is not a good counterexample: the attacker is only able to do this because the analytics scripts are being served over HTTP. If you include the analytics on your site over HTTPS this sort of attack is not relevant.

euroclydon(3752) 3 days ago [-]

Why? The Great Cannon is served from a proxy. It can inject whatever it wants. It doesn't have to swap out ad tracker JS.

myself248(4196) 3 days ago [-]

Or 'Why should I care about security, I have nothing of value' folks.

You do have something of value: Bandwidth.

dfawcus(10000) 3 days ago [-]

That page generates no response for me, https://archive.is/I1WO6 does.

degenerate(4106) 3 days ago [-]

Thanks. For others using CTRL+F to find this link, some keywords... [archive, site down, 404, error page, mirror]

Edit: better/cleaner version: https://outline.com/8BBX3b

DyslexicAtheist(117) 3 days ago [-]

This should be mitigated by browser vendors by integrating HTTPSEverywhere as core functionality of the browser that needs to be explicitly turned off (instead of the current state of affairs, where only a tiny minority on the web are familiar with installing security add-ons). Visiting an HTTP site should come with a scary warning. I understand this throws old sites under the bus, but there could be other solutions here, such as restricting third-party resources as a second-layer defense once the user clicks through the first warning to access the HTTP content.

and in case I'm totally wrong, what mitigations are feasible? More trade war such as by compelling ISP's to null-route Chinese businesses like Baidu.com as a form of sanction?

jiofih(10000) 3 days ago [-]

I fail to see how this attack has anything to do with http? The scripts can be served over https no problem, it's the host that is compromised. Maybe you're thinking of sub-resource integrity attributes?

flattone(10000) 3 days ago [-]

surprised the relevant powerful/time tested and highly technical participants at whichever appropriate layer of networking aren't just forcing https only. #studentquestion

DyslexicAtheist(117) 3 days ago [-]

just for good measure:

  echo -e '\n# Null route the Great Cannon:\n0.0.0.0 baidu.com\n0.0.0.0 qihucdn.com' | sudo tee -a /etc/hosts
... but I know I'm only fooling myself.
inimino(4175) 3 days ago [-]

I posted a top-level comment[1], but basically HTTPS-only, aside from throwing old sites under the bus, would not have helped.

[1] https://news.ycombinator.com/item?id=21726617

> and in case I'm totally wrong, what mitigations are feasible? More trade war such as by compelling ISP's to null-route Chinese businesses like Baidu.com as a form of sanction?

Probably something like this, but I'm afraid of where that would lead.

randyrand(4199) 3 days ago [-]

I think China's government requires websites to give them their private keys. HTTPS is useless then.

gruez(3724) 3 days ago [-]

>and in case I'm totally wrong, what mitigations are feasible? More trade war such as by compelling ISP's to null-route Chinese businesses like Baidu.com as a form of sanction?

A slightly less broad measure that's just as effective would be to block unencrypted http traffic from entering China. Want to get unblocked? Get letsencrypt.

An even better (but slightly greyhat) route would be to inject HSTS headers with the maximum expiry date. This will cause any visitors' browsers to get 'infected' with an unskippable warning, forcing them to upgrade no matter what.
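A hedged sketch of that header-injection idea: a proxy adding an HSTS policy with a long max-age (two years here; the value is arbitrary). One caveat: per RFC 6797, browsers ignore an HSTS header received over plain HTTP, so in practice it would have to ride an HTTPS response.

```python
# Sketch of HSTS injection at a proxy. The header name and format are
# standard (RFC 6797); the two-year max-age is an arbitrary choice.
def inject_hsts(headers: dict) -> dict:
    """Return a copy of the response headers with a long-lived HSTS policy."""
    out = dict(headers)
    out["Strict-Transport-Security"] = "max-age=63072000; includeSubDomains"
    return out
```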

JoshTriplett(171) 3 days ago [-]

> Visiting a HTTP site should come with a scary warning

Browsers are already moving to explicitly label HTTP sites as 'not secure'.

CharlesColeman(4211) 3 days ago [-]

> This should be mitigated by browser vendors by integrating HTTPSEverywhere as a core functionality of the browser that needs to be explicitly turned off (instead of the current state of affairs where we have a tiny minority on the web who are familiar with installing security add-ons).

We're talking about China, so that's probably not going to work: Chinese users are using Chinese browsers [1] to access Chinese websites. I don't think Chinese browser-makers and website operators are going to take action against their government like that.

[1] https://www.fastcompany.com/3058432/the-top-3-web-browsers-i...

DyslexicAtheist(117) 3 days ago [-]

for anyone interested, Brian Krebs did an excellent article[1] on The Great Cannon after the Citizen Labs incident.

> [Nicholas] Weaver said the attacks from the Great Cannon don't succeed when people are browsing Chinese sites with a Web address that begins with 'https://', meaning that regular Internet users can limit their exposure to these attacks by insisting that all Internet communications are routed over 'https' versus unencrypted 'http://' connections in their browsers. A number of third-party browser plug-ins — such as https-everywhere — can help people accomplish this goal.

> But Bill Marczak, a research fellow with Citizen Lab, said relying on an always-on encryption strategy is not a foolproof counter to this attack, because plug-ins like https-everywhere will still serve regular unencrypted content when Web sites refuse to or don't offer the same content over an encrypted connection. What's more, many Web sites draw content from a variety of sources online, meaning that the Great Cannon attack could succeed merely by drawing on resources provided by online ad networks that serve ads on a variety of Web sites from a dizzying array of sources.

[1] https://krebsonsecurity.com/2015/04/dont-be-fodder-for-china...

LeftHandPath(10000) 3 days ago [-]

I recently (4 or 5 months ago) joined an online community of aircraft owners and pilots that is primarily focused on a single brand of aircraft (although it's not an official site of, or property of, that brand, nor is it endorsed by it).

When I signed up, they emailed me to welcome me to the site (they actually require manual authorization of users by an admin, which is... refreshing, but uncommon). The email ended by stating that if I lost my password, they could 'recover it' and send it back to me.

I raised a thread about it in one of their off-topic sections, and got harassed - 'How secure do you need your browsing to be?' (And hey, I mean, I was asking them to do more work)

But it stands out that most of the public doesn't know, and doesn't care to know. Even a site that's populated by people with net worths and/or incomes that average in the six-to-seven figure range, that they probably signed up for with the same email address and password that they use for their bank and brokerage accounts.

HTTP should come with a warning. Furthermore, it would be fan-fucking-tastic if there was some generalizable way to (automatically) audit a website's security practice. Like, a crawler that just runs standard OWASP-style attack-vector checks, and sends an email to the site's owners when one succeeds. And then put that data into a database and warn users (with a browser plugin) when they are creating credentials for sites with bad security.
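A toy version of that automated audit, checking only response headers (real OWASP-style scanning goes far beyond this; the header names are the standard ones):

```python
# Minimal security-header audit: report which standard hardening headers
# a site's response lacks. A real crawler would also probe for actual
# vulnerabilities; this only covers the cheapest, most mechanical check.
EXPECTED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
]

def audit_headers(headers: dict) -> list:
    """Return the list of expected security headers missing from a response."""
    present = {k.lower() for k in headers}
    return [h for h in EXPECTED if h.lower() not in present]
```

Feeding this the headers from a live fetch, and mailing site owners when the list is non-empty, is roughly the crawler described above.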

ghostly_s(10000) 3 days ago [-]

I don't quite understand the mechanism after reading the article. Is the attacker (presumably the PRC) MITM'ing these CDN resources at the infrastructure level? If they had exploits in place within these CDNs (presumably within the PRC's capabilities) HTTPS wouldn't help, no?

nullc(1994) 3 days ago [-]

The web needs to start moving towards a strong same-origin policy for all embedded content-- require sites to proxy requests if they want third party content.

The first step could be sending CORS preflight, then requiring it, then just not allowing cross origin to different domains (but allow sub-/sibling- domains).
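A rough sketch of that policy: allow a cross-origin request only when the requesting origin shares a parent domain with the target (a naive two-label heuristic; a real implementation would consult the Public Suffix List):

```python
# Naive same-site check allowing sub-/sibling-domains, as proposed above.
# Caveat: splitting on the last two labels mishandles suffixes like
# co.uk; a real implementation would use the Public Suffix List.
from urllib.parse import urlparse

def same_site(origin: str, target_host: str) -> bool:
    """True if `origin` may embed content from `target_host`."""
    host = urlparse(origin).hostname or ""
    parent = lambda h: ".".join(h.split(".")[-2:])
    return host == target_host or parent(host) == parent(target_host)
```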

hinkley(4121) 3 days ago [-]

About a month ago we were discussing this and a few of us came to the conclusion that an eventually-required CORS header for cross-origin GETs would be a good thing. CDNs and SSO services could start sending this header so they can stay in business when the browsers turn off all cross-origin requests by default.

Unfortunately (from my perspective) that'll do nothing to stop third party ad tracking but you can't have everything, I suppose.

goalieca(4134) 3 days ago [-]

The problem right now is that the originating server sets an HTTP response header. Given that the MITM can modify that header, things need to be done automagically in the browser. But that will break A LOT.

cortesoft(10000) 3 days ago [-]

Not sure how much that would help... they could just have their own domain be a CNAME to the target.

Your defense idea might stop layer 7 attacks, but not lower level ones.

burtonator(2084) 3 days ago [-]

Can't the CORS preflight, by itself, be a DoS?

1shooner(10000) 3 days ago [-]

How would this be different than the CNAME cloaking[1] currently being used by data collectors to circumvent ad blocking software?

1. https://news.ycombinator.com/item?id=21604825

jacquesm(43) 3 days ago [-]

So, maybe firewall off China for a couple of days? Sure, it would hurt on both sides but at least it would be clear that abuse at this scale leads to being blackholed.

mminer237(10000) 3 days ago [-]

LIHKG requires a Hong Kong ISP to register anyway, so that site blocking mainland China wouldn't hurt it at all.

burtonator(2084) 3 days ago [-]

Google has had a lot of success blacklisting domains that spam. Getting blacklisted and losing 30-90 days' worth of traffic because you wanted to bump your PageRank slightly is a bit silly.

We could potentially have sanctions that require Google to block commercial sites in China. That would definitely get their attention without massive financial implications on the economy.

This type of behavior CAN NOT be allowed to continue.

weberc2(4188) 3 days ago [-]

I would rather see more rigorous trade policy. Frankly fewer low-quality or fraudulent Chinese imports will probably be a net positive and even if it is more expensive, I would rather our trade dollars support countries with less corrupt governments and better ethics with respect to intellectual property, fraud, environmental protection, etc.

I'm sure this will garner plenty of whataboutism regarding how the west is imperfect (never minding that I didn't say "the west")...

dmead(4164) 3 days ago [-]

That's what they want: a bifurcation of the internet.

yumraj(3568) 3 days ago [-]

I agree that such bad behavior should be punished, but why just a couple of days? This would be similar to UN trade sanctions that are imposed on bad state actors.

I think we generally overestimate the hurt on the outside and underestimate the hurt on the inside considering the massive trade imbalance that China enjoys with the rest of the world.

Personally I have already pi-holed the entire .cn TLD and other domains.

xucheng(3399) 3 days ago [-]

This doesn't work. The DDoS requests actually come from outside China, when overseas visitors are hit by the malicious JS while browsing Chinese websites.

ai_ja_nai(4123) 3 days ago [-]

How about interrupting BGP traffic from/to China at nearby western ASes every time the Cannon is used?

zer00eyz(4220) 3 days ago [-]

What would happen if we black-holed all of China's IP ranges from all over the USA?

I suspect that a lot of businesses would flex muscle on both sides to get that to stop really quickly.

It would be a hard policy to implement on our side, but likely very effective. It's almost like we need someone in power smart enough to ASK telcos and carriers to DO such a thing.

Historical Discussions: Archivists Are Trying to Make Sure LibGen Never Goes Down (December 03, 2019: 901 points)
Archivists Are Trying to Make Sure a 'Pirate Bay of Science' Never Goes Down (December 02, 2019: 3 points)

(903) Archivists Are Trying to Make Sure LibGen Never Goes Down

903 points 6 days ago by legatus in 3840th position

www.vice.com | Estimated reading time – 8 minutes | comments | anchor

TV is exhausting now. Where once the future of streaming promised to cut down bloated cable bills and create a more efficient customer-provider service—"Imagine a future where you only pay for the 10 channels you actually watch," I remember excitedly telling my parents earlier in the decade—the reality is that now, there are simply more channels that you need to pay for.

Aside from traditional cable, which remains a must for any sports fan at the absolute least, there now exist more than a half-dozen prominent streaming services (and lots more small ones), all filled with a couple of buzzy shows, some old favorites, and endless filler crap that makes the library of content seem more valuable than it is. And if keeping up with the Emmy-nominated offerings of services like Netflix, Hulu, and Amazon Prime didn't already feel like a financial strain, the launch of Apple TV+ and the fawned-over premiere of Disney+ might have done it.

By my count, if you want to watch shows on HBO, Apple TV+, Disney+, CBS All-Access, Amazon Prime, Hulu, and Netflix, it'd run you $60.93 a month or $731.16 a year, and that's before factoring in a standard cable package for live events and other shows, or the other streaming services sure to launch in the near future. (NBC's got one coming down the pike.) Of course, nobody has to pay for all these things, but the problem here is that, with the arguable exception of CBS, all of these services have at least something resembling a buzzy hit show. If you want to watch, for example, Euphoria, Dickinson, The Mandalorian, Fleabag, Killing Eve, and Stranger Things, you're going to need a lot of accounts.

With two of the largest companies in the world joining the game this month, adding their own movie stars and iconic IP to the slush pile, it's worth wondering if the streaming revolution has officially failed TV and movie fans with its endless mitosis and fragmentation. Instead of letting viewers just pay for the stuff they watch, they're forced, instead, to choose between equally flawed packages where the fun and/or high-quality shows get bundled with pointless crap that jacks up the price. Unlike Spotify and its clones, which include essentially all the music a person could want, one relatively cheap subscription to any Movie/TV streaming service doesn't give you access to more-or-less the entire history of moving pictures. And unlike Spotify and its clones, which have caused a massive downturn in music piracy, the shows on all these platforms are ripe for stealing.

Piracy has never truly died, whether it occurs when users torrent files through thepiratebay.org or 1337x.to, or download shows through Usenet, a site like MegaUpload or Rapidshare, or find cleverly hidden files on Google Docs, Facebook, or even Wikipedia. Hell, there's even a thriving network of USB drive-based piracy in some countries.

But in the era of Netflix's dominance as a legit streaming service, piracy's prevalence fell greatly. Between 2011 and 2015, Bittorrent's share of upstream traffic on North American broadband networks dropped from 52.01 percent to 26.83 percent. More recently, an EU study found that the number of young people (age 15-24) who intentionally accessed illegal content dropped from 25 percent to 21 percent from 2016 to 2019. And visits to piracy websites dropped from 206 billion in 2017 to 190 billion in 2018. But now, more paywalled content means that viewers either can't afford to pay for everything they might want to watch or don't feel like dealing with a bunch of different services. There's already evidence people are turning back to piracy: Bittorrent's traffic freefall has stopped, and has seen a recent small bounceback.

"If people have to spend more money to satisfy their movie and TV consumption needs, a large group will either consume less or look for alternatives," Ernesto van der Sar, owner of piracy trend website TorrentFreak recently told Motherboard. "A likely result is that more people will pirate on the side."

A simple glance at torrent websites shows that plenty of people are stealing from the brand new streaming services—episodes of The Mandalorian and Dickinson all have hundreds or thousands of seeders and are among the most popular shows on torrent sites. I reached out specifically to Disney, Apple, and Netflix to ask what their policy was on going after pirated content, and haven't heard back, but it's obvious that these companies assume that at least some of their viewers aren't paying the full price for their services. Given that you can watch as many as six simultaneous streams with Apple TV+, and four with Disney+ and the top Netflix package, the more common form of piracy—password sharing—is built into the system. But for pirates who don't have any access to the legit services, what makes stealing content particularly appealing in this age is that there are few if any people who face consequences for the crime.

Since the discontinuation of the "six strikes" copyright policy in 2017, there's been lax enforcement of copyright laws. Rather than going after individuals for exorbitant fines for downloading a handful of songs like copyright holders did a decade ago, enforcement these days has focused on the providers of pirated content, with the much more efficient goal of taking down entire streaming sites rather than just a few of their visitors. Of course, as the continued resilience of The Pirate Bay shows, the current strategy isn't particularly effective at stopping piracy, either. But it does mean that those who only download already-stolen content are safer than they've ever been.

And the widespread use of virtual private networks (VPNs) and the less common but more secure use of services like Tor means that people are getting better at pirating, too. Even if you're not breaking the law, you should pay for a VPN, because there's reason to suspect ISPs are monetizing your browsing data. But having some form of VPN is non-negotiable if you're downloading content illegally. Typically, if a pirate is using a torrent program to download files peer to peer, their IP address is visible to anyone using the program. (That's how copyright holders could track down pirates and take them to court.) But with a VPN, a pirate's data gets laundered through the location of the private network—meaning someone in New York could have a public IP address showing Chicago, Toronto, or anywhere in the world. If a pirate is careful, it's much harder to know who they are.

None of this is to say you should steal—it's illegal! But whether you're on the business side spending millions of dollars on new shows, or you're just a girl who likes to watch hot people punch each other for a few hours every week, it's clear that companies are overwhelming customers with products, and a breaking point is coming where people won't be able to pay for all of it. As more and more streaming services launch, each with their own content walled off from the others, it'd be ignorant and naive to think that piracy won't increase with it.

All Comments: [-] | anchor

Tepix(3968) 6 days ago [-]

Related: looking at hard-disk cost per terabyte, quite often external drives are cheaper than internal ones.

For example right now in Germany I can get a WD 8TB USB 3.0 drive for 135€ but the cheapest internal 8TB drive costs 169€.

Any idea why? It's puzzling.

weinzierl(376) 6 days ago [-]

For me on amazon.de the WD 8TB USB 3.0 drive is currently at EUR 159.99. Where do you get it for EUR 135?

walrus01(1957) 6 days ago [-]

It is very common these days to buy the WD 8TB, 10TB and 12TB external USB3 hard drives, remove their cases, and put them in some sort of home-built file server or NAS. There's a technique of putting a thin section of Kapton tape over one of the SATA pins (the 3.3 V power-disable pin) so that they will power up from ordinary PC/ATX-type power supplies with regular SATA power connectors.


In large ZFS arrays, many people are using them with great success, at no greater or lesser annual failure rate than the expensive enterprise hard drives.

LameRubberDucky(10000) 6 days ago [-]

I noticed this yesterday while shopping for cyber Monday deals. If you want to load up a server with drives, perhaps the external drives can be removed from their cases and used internally?

driverdan(1482) 6 days ago [-]

It has been like this for years. You'll often see people refer to 'shucking' them, taking the drives out to use in a NAS.

cr0sh(10000) 6 days ago [-]

My best guess would be that more people buy external drives than internal, and those that 'manufacture' external drives (i.e. buy internal drives and repackage them) purchase in larger volumes than those that sell bare internal drives.

Of course, that wouldn't explain the difference between a WD external drive and that same drive as an internal drive - assuming that WD actually manufactures both (and doesn't just license the name provided the 3rd party uses their drives)...

burtonator(2084) 6 days ago [-]

What's interesting is that 32TB is becoming more and more affordable and the research material is roughly staying about the same size.

That might change though as people start including video + data within papers and have new notebook formats that are live and contain docker containers/ipython, etc.

It's a shame we can't just mail these around.

asdff(10000) 6 days ago [-]

When people publish data it's typically uploaded to a public repository anyway. Supplementary videos are a thing, but in my field at least they generally stay in the supplementary and aren't the raw data so file sizes are reasonable, while still images are used in the text. Journals are still printed works first, believe it or not.

jbverschoor(3393) 6 days ago [-]

You can buy 48TB (4x12TB) for €1000. Store some index on an SSD, and you have another full node.

izzydata(10000) 6 days ago [-]

The bandwidth to upload to people can get expensive depending on where you live. Most home connections don't have symmetric fiber, so you are stuck with cripplingly low upload bandwidth.

mister_hn(10000) 6 days ago [-]

One could use FAANG data centers to host them for free, it would be really great

woofcat(10000) 6 days ago [-]

Look at the Google Books project. That got shut down real hard due to copyright issues and litigation, after they invested a ton of money in digitizing some of the most valuable library collections in the world.

dooglius(10000) 6 days ago [-]

There is a huge amount of duplication there (i.e. books that have many scans), I wonder if it would be better to tackle that versus doing a straight backup.

Invictus0(4188) 6 days ago [-]

I think the duplication issue is probably overstated. I doubt tackling that would shave off more than 20% of the total backup size.

Mediterraneo10(10000) 6 days ago [-]

This is a downside of Libgen: duplicate uploads, missing or erroneous metadata. You start wishing that there was at least some curation of the collection, so it could approach the quality of the academic library catalogues many users are used to. But I guess the people behind Libgen want to keep the number of people with database edit rights small. (When you upload a book, you yourself can edit the metadata for that book for 24 hours, but you cannot go through the rest of LibGen's database and make corrections.)

legatus(3840) 6 days ago [-]

There are groups behind data curation as well, though it is much harder. LibGen sees an addition rate of about 230 GBs per month, while SciMag's is around 1.10 TBs per month. We should expect those numbers to increase in the future. The man-hours required to curate those databases may very well cost much more than the storage and bandwidth required to store duplicates and incorrectly tagged files. In any case, as I said, there are people seriously interested in curating the LibGen database, though most efforts I know of are still in the earliest stages.

Avamander(10000) 6 days ago [-]

Why not publish the site over IPFS? That would make P2P hosting much simpler.

skjoldr(10000) 6 days ago [-]

How about Tahoe-LAFS? I haven't used it, but it should be stable by now.

There's also ZeroNet, though IDK if it can handle the traffic.

traverseda(3134) 6 days ago [-]

In my experience IPFS doesn't actually work. I'd love to be proven wrong, but the reason nobody uses IPFS even when it seems like a great fit is because it's not really usable.

legatus(3840) 6 days ago [-]

Currently (at least for the-eye) it's about IPFS's barrier to entry. I expect LibGen's case to be similar. Most people don't know about it, and even those that did would probably just try to find the book they're looking for elsewhere rather than learn how IPFS works.

miki123211(3930) 6 days ago [-]

The new architecture of pirate sites, what I call the Hydra architecture, seems pretty interesting to me. There isn't a single site hosting the content, but a group of mirrors freely exchanging data between one another. In case some of them go down, the other ones still remain and new ones can appear, copying data from the remaining mirrors. This is like a hydra that grows two heads every time you chop one off. It's absolutely unkillable, as there's no single group or server to sue.

A more advanced version of this architecture is used by pirate addons for the Kodi media center software. Basically, you have a bunch of completely legal and above-board services like IMDb that contain video metadata. They provide the search results, the artwork, the plot descriptions, episode lists for TV shows, etc. Impossible to sue and shut down, as they're legal. Then, you have a large number of illegal services that, essentially, map IDs from websites like IMDb to links. Those links lead to websites like Openload, which let you host videos. They're in the gray area; if they comply with DMCA requests and are in a reasonably safe jurisdiction, they're unlikely to be shut down.

On the Kodi side, you have a bunch of addons. There are the legitimate ones that access IMDb and give you the IDs, the not-so-legitimate ones that map IDs to URLs, and the half-legitimate ones that can actually play stuff from those URLs (not an easy task, as websites usually try to prevent you from playing something without seeing their ads). Those addons are distributed as libraries, and are used as dependencies by user-friendly frontends. Those frontends usually depend on several addons in each category, so, in case one goes down, all the other ones still remain. It's all so decentralized and ownerless that there's no single point of failure. The best you can do is kill the frontend addon, but it's easy to make a new one, and users are used to switching them every few months.
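The fallback chain that makes this architecture resilient can be sketched as a frontend that tries each resolver add-on in turn (all names here are hypothetical):

```python
# Sketch of the resolver fallback described above: several independent
# "resolver" add-ons each map a metadata ID to playable URLs; the
# frontend tries them in order and survives any single one vanishing.
def resolve(video_id, resolvers):
    """Return the first non-empty list of URLs any resolver yields."""
    for r in resolvers:
        try:
            urls = r(video_id)
            if urls:
                return urls
        except Exception:
            continue  # a dead resolver is simply skipped
    return []
```

Because the frontend only depends on the resolver interface, not on any particular resolver, taking one down changes nothing for users.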

jackcodes(10000) 6 days ago [-]


rollinDyno(3962) 6 days ago [-]

I worry that if this system becomes permanent, one in which it is practically impossible to stop piracy, then with the loss of traditional incentives we might find ourselves in a place where no motivated investor will break even when producing quality content.

bobongo(10000) 6 days ago [-]

> there's no single point of failure. The best you can do is killing the frontend addon

Single decentralized service, providing access to all content, national and international, free of DRM, for all platforms, for a proper, fair, and non-monopolist price.

That will pull all the users who are willing to pay for content over to the paid service, and those who remain were never willing to pay regardless of what you did anyhow.

MadWombat(10000) 6 days ago [-]

> It's absolutely unkillable

Just like any other distributed system, this is vulnerable to organized take downs and scare tactics. There was a whole bunch of mirrors of Pirate Bay, yet once most of Europe's legal systems adopted the 'sharing is theft' mindset, it became pretty much impossible to find one.

buboard(3489) 6 days ago [-]

One of the next interplanetary or interstellar probes should carry a copy of the sci-hub torrent in some kind of permanent storage.

walrus01(1957) 6 days ago [-]

There is no need to put a data storage archive on something shot out into interstellar space. Geostationary telecommunications satellites are at a sufficiently high orbit that they will likely outlast human civilization. We could destroy ourselves with nuclear war, regress to a stone-age level of technology, rediscover spaceflight, and go find them long before the orbits of any of them decay.

claudiawerner(4221) 6 days ago [-]

This is an interesting idea because there's a lot of radical political and philosophical publications on there. Brill's Historical Materialsim book series is on there almost in its entirety.

saalweachter(3746) 6 days ago [-]

Do we have anything rated for a few millennia of interstellar radiation besides etched gold plates?

Someone1234(4213) 6 days ago [-]

Storing that amount of information in a way that an unknown alien species would be able to read (even assuming technical expertise greater than our own) is a huge problem.

Keep in mind that they don't know our written or computational language and there's nothing about our technology that is inherently self-explaining/obvious.

Even the assumption that they'd use binary computers (rather than trinary, or other technology not based around electrical voltages) is open to debate.

fghtr(3387) 6 days ago [-]

Are there any i2p torrents? I guess anonymity might be helpful if I want to mirror/seed this data...

zozbot234(10000) 6 days ago [-]

I assume anyone could simply seed the 'official' torrents via i2p? Not sure how that system actually works, it's interesting for sure but a lot less well-known than the alternatives.

sanxiyn(2820) 6 days ago [-]

The Yongle Encyclopedia was a similar project in 15th-century China. It was the largest encyclopedia in the world for 600 years, until surpassed by Wikipedia.

Alas, Yongle Encyclopedia is almost completely lost now. Archiving is harder than you think.


weinzierl(376) 6 days ago [-]

I read the Wikipedia article about it, and the sad thing is that the majority of the Yongle Encyclopedia seems to have been destroyed only in quite recent times.

8bitsrule(4012) 6 days ago [-]

WP says that it was never printed for the general public. Hmmm. Had it been (parts duplicated, say, at hundreds of sites), most of it would probably have survived.

nullifidian(3890) 6 days ago [-]

Posting that here only creates problems for them. The more it's known in the west the more likely it will go down.

coffee12345(10000) 6 days ago [-]

+1, bookwarrior has warned about this.

legatus(3840) 6 days ago [-]

This is an extremely important effort. The LibGen archive contains around 32 TBs of books (by far the most common being scientific books and textbooks, with a healthy dose of non-STEM). The SciMag archive, backing up Sci-Hub, clocks in at around 67 TBs [0]. This is invaluable data that should not be lost. If you want to contribute, here's a few ways to do so.

If you wish to donate bandwidth or storage, I personally know of at least a few mirroring efforts. Please get in touch with me over at legatusR(at)protonmail(dot)com and I can help direct you towards those behind this effort.

If you don't have storage or bandwidth available, you can still help. Bookwarrior has requested help [1] in developing an HTTP-based decentralizing mechanism for LibGen's various forks. Those with experience in software may help make sure those invaluable archives are never lost.

Another way of contributing is by donating bitcoin, as both LibGen [2] and The-Eye [3] accept donations.

Lastly, you can always contribute books. If you buy a textbook or book, consider uploading it (and scanning it, should it be a physical book) in case it isn't already present in the database.

In any case, this effort has a noble goal, and I believe people of this community can contribute.

P.S. The 'Pirate Bay of Science' is actually LibGen, and I favor a title change (I posted it this way as to comply with HN guidelines).


[1] https://imgur.com/a/gmLB5pm

[2] bitcoin:12hQANsSHXxyPPgkhoBMSyHpXmzgVbdDGd?label=libgen, as listed in https://it.wikipedia.org/wiki/Library_Genesis

[3] Bitcoin address 3Mem5B2o3Qd2zAWEthJxUH28f7itbRttxM, as found in https://the-eye.eu/donate/. You can also buy merchandising from them at https://56k.pizza/.

canuckintime(10000) 6 days ago [-]

> Lastly, you can always contribute books. If you buy a textbook or book, consider uploading it (and scanning it, should it be a physical book) in case it isn't already present in the database.

There's no easy solution for scanning physical books, is there?

eej2ya1K(10000) 6 days ago [-]

Is it an important effort, though? If the LibGen archive disappeared, I doubt my life would change in any meaningful way.

I'd love to be proven wrong.

0xdeadbeefbabe(3488) 6 days ago [-]

I guess it's stunningly obvious to everyone else, but how are you certain the replacement isn't worse than the original system? I already see comments about the curation problem, for example. What's the point in making bad information (duplicate information etc.) highly available? Why put so much faith in this donation strategy, i.e. donating bandwidth or donating money?

dewey(1273) 6 days ago [-]

I just read the article and your comments here and I'm a bit unsure what's the difference to the Internet Archive. Is it that the IA can archive them but not make them public for legal reasons and The-Eye is more focused on keeping them online and accessible no matter what?

oefrha(4042) 6 days ago [-]

Sounds like anyone with a seed box could donate some bandwidth and storage by leeching then seeding part of it? It would be nice if there's a list of seeder/leecher counts (like TPB) or better yet of priority list of parts that need more seeders.

Edit: Found the other comment where you link to the seeding stats: https://docs.google.com/spreadsheets/d/1hqT7dVe8u09eatT93V2x...

guidoism(4187) 6 days ago [-]

For important archives like this, maybe we need some sort of turn-key solution for the masses? Like a Raspberry Pi image that maintains a partial mirror. Imagine if one could buy an RPi and an external HD, burn the image, and connect it to some random wifi network (at home, at work, at the library, etc).

whydoyoucare(10000) 6 days ago [-]

Isn't scanning a physical book and uploading a soft copy a minefield of hazards (both legal and moral)? Essentially you are encouraging (some) unlawful activity... I am not so sure I am onboard with this idea!

guidoism(4187) 6 days ago [-]

It's easy to take this stance in a rich country. But what about the people in countries where one of these books cost the equivalent of a year's wages. Not so black and white eh?

voldacar(4215) 6 days ago [-]

Is there a way to just download the whole 32TB to your own machine? I see a ton of mirrors but the content seems to be highly fragmented between them

legatus(3840) 6 days ago [-]

There are ways to do so. The archive is made up of many, many torrents (I believe it's a monthly if not biweekly update of the database). If you have the storage/bandwidth availability for the whole 32TBs, please get in touch and I may be able to help you get the whole deal without too much hassle. Otherwise, just pick some torrents (it would be best to pick them based on torrent health, but they are so many to check manually) and try to keep seeding as much as possible.

EDIT: To find libgen's torrents health, check out this google sheet: https://docs.google.com/spreadsheets/d/1hqT7dVe8u09eatT93V2x...

Thanks frgtpsswrdlame for the heads up.
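For anyone picking torrents manually, a small script over a CSV export of that health sheet can surface the weakest torrents first. This is only a sketch: the column names below are hypothetical, and the real spreadsheet's layout may differ.

```python
import csv
import io

# Hypothetical CSV export of the seeding-stats sheet; the real
# column names in the Google Sheet may differ.
sample = """torrent,seeders
r_000.torrent,0
r_1000.torrent,12
r_2000.torrent,1
"""

def neediest_torrents(csv_text, max_seeders=1):
    """Return torrents with at most max_seeders seeders, weakest
    first, so a new mirror can prioritise what it seeds."""
    rows = csv.DictReader(io.StringIO(csv_text))
    needy = [(int(r["seeders"]), r["torrent"])
             for r in rows if int(r["seeders"]) <= max_seeders]
    return [name for _, name in sorted(needy)]

print(neediest_torrents(sample))  # → ['r_000.torrent', 'r_2000.torrent']
```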

knzhou(10000) 6 days ago [-]

Libgen is one of the greatest contributors to scientific productivity worldwide, possibly beaten only by Sci-Hub. Just about everybody in academia knows about it. If it ever vanished, some of us could probably still get by trading files from person to person, but nothing could be as perfect as what we got now.

bscphil(3991) 6 days ago [-]

> Just about everybody in academia knows about it

Just about everybody in academia uses it, too, especially in the case of Scihub. I can't imagine taking the time to actually check whether I have access to some journal when I want to read a paper, let alone jump through all the hoops before you can get a PDF. The first thing we did when my partner's paper was recently published was check to see if it was on Scihub yet. (It was!)

lioeters(4009) 6 days ago [-]

> possibly beaten only by Sci-Hub

Today I learned that Library Genesis is actually 'powered by Sci-Hub' as its primary source.

So I guess they're sister projects by similarly minded people (who seem to be mostly/originally based in Slavic countries, which I find interesting culturally - perhaps it's due to a looser legal environment + activist academics?).

> Just about everybody in academia knows about it.

That really says something about the state of society, this tension between copyright laws (and the motivations behind them) and the intellectual ideal of free and open access to knowledge.

asdff(10000) 6 days ago [-]

It's saved me probably 3 grand over the course of college. I would have had to take on debt otherwise to pass my courses.

turc1656(10000) 6 days ago [-]

I don't see anyone having mentioned the possibility of posting this data to Usenet at all - at minimum for archival purposes, which should be good for ~8-9 years. That way at least the data isn't lost. With so many of those torrents having 0 or 1 seeds, this is a serious risk I think, despite the comments elsewhere about people rotating what they seed.

I realize that doesn't solve the access problem for most people, as most of the users who need this research might not know how to use Usenet or even be familiar with it at all, but I think the first major concern would be to secure the entire repository on a stable network. Usenet seems like a good place for that even if it doesn't serve as a means of distribution. Encrypting the uploads would make them immune to DMCA takedowns provided that the decryption keys weren't made public and were only shared with individuals related to the maintenance of the LibGen project.

walrus01(1957) 6 days ago [-]

Two thoughts on that. Encoding it to a text format with CRC data for posting to usenet is highly inefficient in terms of data storage. And 33TB of stuff is not going to be retained for 8-9 years, the last I checked due to the huge volume of binaries traffic, the major commercial usenet feed providers have at most 6-9 months of retention for the major binary groups. Beyond that it becomes cost prohibitive for them in terms of disk storage requirements. This is not an issue for the majority of their customers, 6-9 months is more than long enough retention to go find a 40GB 2160p copy of some recently-released-on-bluray movie.

EthanHeilman(3253) 6 days ago [-]

Maybe we should print this out on acid-free paper-thin flexible wood-pulp sheets stitched to together to form linear organized aggregations. Each aggregation would contain one or more works and be searchable using a SQL-like database. To make this plan really work there would need to be a collection of geographically distributed long term physical repositories that would receive periodic updates as new material became available.

All joking aside, I do wonder whether digital or analogue formats are better able to survive into the distant future.

* What impact will DRM have on the accessibility of our knowledge to future historians?

* Is anything recoverable from a harddrive or flash media after 500 years in a landfill?

* Will compressed files be more or less recoverable? What about git archives?

* Will the future know the shape of our plastic GI Joes toys but not the content of the GI Joes cartoon?

qmmmur(10000) 6 days ago [-]

Pretty much everyone in a tech job could afford to buy 40TB of storage at home, or remotely and mirror the entire repo. I think that given this low barrier of entry if you can afford to help preserve the information then you can and probably should. Even if a small amount do it it's more points of recovery.

TheOtherHobbes(4214) 6 days ago [-]

This is not a solvable problem without technological continuity, or some technology far smarter than anything we can imagine today.

If you found a mysterious archive object and had no idea what it was - CD-R, hard drive, SSD, whatever - not only would you have to reinvent an entire hardware reader around it, you would also have to work out the file structure, extract the data (some of which could be damaged), and reverse engineer the container file formats and the data structures inside them.

If you got all of that right, you'd eventually be able to start trying to translate the content of the text, audio, images, videos (how many compression formats are there?) into something you could understand.

A much more advanced civilisation would struggle with making a cold start on all of that. In our current state, we'd get nowhere if we didn't already have some records explaining where to begin.

unicornporn(2737) 6 days ago [-]

In the GLAM sector the LOCKSS[1] project is quite well-known. It tries to deal with some of the resiliency problems that are inherent in digital preservation. However, I'd guess this system does not offer the needed anonymity.

[1] https://www.lockss.org/ ; https://en.wikipedia.org/wiki/LOCKSS

frobozz(10000) 6 days ago [-]

> I do wonder wither digital or analogue formats are better able to survive into the distant future.

There are 5000 year old clay tablets we can still read.

There are centuries old documents on paper, vellum etc. that we can still read.

I personally have decades-old paper documents I can easily read, and a box of floppies I can't.

It's not just a problem of unreadable physical media, I have a database file on a perfectly readable HD that was generated by an application that is no longer available. I might be able to interrogate it somehow, but it won't be easy.

Digital formats and connectivity make LOCKSS easier, so that's a plus. There's less chance of a fire or flood or space-limited librarian destroying the last known copy. However, without archivists actively transforming content to new formats as required, it might only take a few decades before a lot of content starts to require a massive effort to read.

IHLayman(10000) 5 days ago [-]

Forget DRM, even future English may be incomprehensible. There is an entire field of study, called nuclear semiotics, dedicated to finding a way to make our voice heard in the far future, so far without a good plausible solution (https://en.wikipedia.org/wiki/Nuclear_semiotics).

If we can't effectively warn a future (>10,000 years) generation to stay away from something that may harm or kill them, what chance do we have of making a universally understandable archive of data?

Historical Discussions: Fight back against Google AMP (2018) (December 04, 2019: 891 points)
Google wants websites to adopt AMP as the default for building webpages (September 05, 2018: 532 points)

(894) Fight back against Google AMP (2018)

894 points 5 days ago by mancerayder in 2073rd position

www.polemicdigital.com | Estimated reading time – 10 minutes | comments | anchor

Let's talk about Accelerated Mobile Pages, or AMP for short. AMP is a Google pet project that purports to be "an open-source initiative aiming to make the web better for all". While there is a lot of emphasis on the official AMP site about its open source nature, the fact is that over 90% of contributions to this project come from Google employees, and it was initiated by Google. So let's be real: AMP is a Google project.

Google is also the reason AMP sees any kind of adoption at all. Basically, Google has forced websites – specifically news publishers – to create AMP versions of their articles. For publishers, AMP is not optional; without AMP, a publisher's articles will be extremely unlikely to appear in the Top Stories carousel on mobile search in Google.

And due to the popularity of mobile search compared to desktop search, visibility in Google's mobile search results is a must for publishers that want to survive in this era of diminishing revenue and fierce online competition for eyeballs.

If publishers had a choice, they'd ignore AMP entirely. It already takes a lot of resources to keep a news site running smoothly and performing well. AMP adds the extra burden of creating separate AMP versions of articles, and keeping these articles compliant with the ever-evolving standard.

So AMP is being kept alive artificially. AMP survives not because of its merits as a project, but because Google forces websites to either adopt AMP or forego large amounts of potential traffic.

And Google is not satisfied with that. No, Google wants more from AMP. A lot more.

Search Console Messages

Yesterday some of my publishing clients received these messages from Google Search Console:

Take a good look at those messages. A very good look. These are the issues that Google sees with the AMP versions of these websites:

"The AMP page is missing all navigational features present in the canonical page, such as a table of contents and/or hamburger menu."

"The canonical page allows users to view and add comments, but the AMP article does not. This is often considered missing content by users."

"The canonical URL allows users to share content directly to diverse social media platforms. This feature is missing on the AMP page."

"The canonical page contains a media carousel that is missing or broken in the AMP version of the page."

Basically, any difference between the AMP version and the regular version of a page is seen as a problem that needs to be fixed. Google wants the AMP version to be 100% identical to the canonical version of the page.

Yet due to the restrictive nature of AMP, putting these features into an article's AMP version is not easy. It requires a lot of development resources to make this happen and appease Google. It basically means developers have to do all the work they already put into building the normal version of the site all over again, specifically for the AMP version.

Canonical AMP

The underlying message is clear: Google wants full equivalency between AMP and canonical URL. Every element that is present on a website's regular version should also be present on its AMP version: every navigation item, every social media sharing button, every comment box, every image gallery.

Google wants publishers' AMP version to look, feel, and behave exactly like the regular version of the website.

What is the easiest, most cost-efficient, least problematic method of doing this? Yes, you guessed it – just build your entire site in AMP. Rather than create two separate versions of your site, why not just build the whole site in AMP and so drastically reduce the cost of keeping your site up and running?

Google doesn't quite come out and say this explicitly, but they've been hinting at it for quite a while. It was part of the discussion at AMP Conf 2018 in Amsterdam, and these latest Search Console messages are not-so-subtle hints at publishers: fully embracing AMP as the default front-end codebase for their websites is the path of least resistance.

That's what Google wants. They want websites to become fully AMP, every page AMP compliant and adhering to the limitations of the AMP standard.

The Google-Shaped Web

The web is a messy, complicated place. Since the web's inception, developers have played fast and loose with official standards, and web browsers like Netscape and Internet Explorer added to this mess by introducing their own unofficial technologies to help advance the web's capabilities.

The end result is an enormously diverse and anarchic free-for-all where almost no two websites use the same code. It's extremely rare to find websites that look good, have great functionality, and are fully W3C compliant.

For a search engine like Google, whose entire premise is based on understanding what people have published on the web, this is a huge challenge. Google's crawlers and indexers have to be very forgiving and process a lot of junk to be able to find and index content on the web. And as the web continues to evolve and becomes more complex, Google struggles more and more with this.

For years Google has been nudging webmasters to create better websites – 'better' meaning 'easier for Google to understand'. Technologies like XML sitemaps and schema.org structured data are strongly supported by Google because they make the search engine's life easier.

Other initiatives like disavow files and rel=nofollow help Google keep its link graph clean and free from egregious spam. All the articles published on Google's developer website are intended to ensure the chaotic, messy web becomes more like a clean, easy-to-understand web. In other words, a Google-shaped web. This is a battle Google has been fighting for decades.

And the latest weapon in Google's arsenal is AMP.

Websites built entirely in AMP are a total wet dream for Google. AMP pages are fast to load (so fast to crawl), easy to understand (thanks to mandatory structured data), and devoid of any unwanted clutter or mess (as that breaks the standard).

An AMPified web makes Google's life so much easier. They would no longer struggle to crawl and index websites, they would require significantly less effort to extract meaningful content from webpages, and would enable them to rank the best possible pages in any given search result.

Moreover, AMP allows Google to basically take over hosting the web as well. The Google AMP Cache will serve AMP pages instead of a website's own hosting environment, and also allow Google to perform their own optimisations to further enhance user experience.

As a side benefit, it also allows Google full control over content monetisation. No more rogue ad networks, no more malicious ads, all monetisation approved and regulated by Google. If anything happens that falls outside of the AMP standard's restrictions, the page in question simply becomes AMP-invalid and is ejected from the AMP cache – and subsequently from Google's results. At that point the page might as well not exist any more.

Neat. Tidy. Homogenous. Google-shaped.

Dance, Dance for Google

Is this what we want? Should we just succumb to Google's desires and embrace AMP, hand over control of our websites and content to Google? Yes, we'd be beholden to what Google deems is acceptable and publishable, but at least we'll get to share in the spoils. Google makes so much money, plenty of companies would be happy feeding off the crumbs that fall from Google's richly laden table.

It would be easy, wouldn't it? Just do what Google tells you to. Stop struggling with tough decisions, just let go of the reins and dance to Google's fiddle. Dance, dance like your company's life depends on it. Because it does.

You know what I say to that? No.

Google can go to hell.

Who are they to decide how the web should work? They didn't invent it, they didn't popularise it – they got filthy rich off of it, and think that gives them the right to tell the web what to do. "Don't wear that dress," Google is saying, "it makes you look cheap. Wear this instead, nice and prim and tidy."

F#&! you Google, and f#&! the AMP horse you rode in on.

This is the World Wide Web – not the Google Wide Web. We will do as we damn well please. It's not our job to please Google and make our websites nice for them. No, they got this the wrong way round – it's their job to make sense of our websites, because without us Google wouldn't exist.

Google has built their entire empire on the backs of other people's effort. People use Google to find content on the web. Google is just a doorman, not the destination. Yet the search engine has epic delusions of grandeur and has started to believe they are the destination, that they are the gatekeepers of the web, that they should dictate how the web evolves.

Take your dirty paws off our web, Google. It's not your plaything, it belongs to everyone.

Fight Back

Some of my clients will ask me what to do with those messages. I will tell them to delete them. Ignore Google's nudging, pay no heed.

Google is going to keep pushing. I expect those messages to turn into warnings, and eventually become full-fledged errors that invalidate the AMP standard.

Google wants a cleaner, tidier, less diverse web, and they will use every weapon at their disposal to accomplish that. Canonical AMP is just one of those weapons, and they have plenty more. Their partnership with the web's most popular CMS, for example, is worth keeping an eye on.

The easy thing to do is to simply obey. Do what Google says. Accept their proclamations and jump when they tell you to.

Or you could fight back. You could tell them to stuff it, and find ways to undermine their dominance. Use a different search engine, and convince your friends and family to do the same. Write to your elected officials and ask them to investigate Google's monopoly. Stop using the Chrome browser. Ditch your Android phone. Turn off Google's tracking of your every move.

And, for goodness sake, disable AMP on your website.

Don't feed the monster – fight it.

Mobile, News SEO, Technical

All Comments: [-] | anchor

ars(3058) 5 days ago [-]

I totally understand the issue raised by this article.

But I have a problem: I like AMP because it is so fast!

Publishers have done this to themselves by loading up simple web pages with hundreds of scripts that slow it down to almost unusable levels.

Maybe if publishers stopped doing that, then you could start criticizing AMP.

'Some of my clients will ask me what to do with those messages. I will tell them to delete them. Ignore Google's nudging, pay no heed.'

And then your clients will be unhappy with the traffic results. Are you going to also tell your clients the drawbacks?

RonanTheGrey(3738) 5 days ago [-]

It's kind of a 'pay no attention to the man behind the curtain' type of thing.

Google had the option of simply highly ranking pages that are performant and lightweight on mobile. They didn't take that option, instead they went with AMP. There's intent there. I'm not going to speculate on the intent, because it isn't relevant: the point is that because there was a simpler technical solution available that they didn't take, there is clearly intent in going the harder path. It's worth asking what that means.

AMP provides all the benefits that a well-built, well-architected site would, and that would have been the 'non-evil' option. Use web standards and push people to respect them, imagine that...

misterdoubt(10000) 5 days ago [-]

The people criticizing AMP are not necessarily the same people 'loading up simple web pages with hundreds of scripts.'

tzs(3300) 5 days ago [-]

I have a page that I put on the web about 13 years ago, that somehow became the top hit for 'how to <X>' for a particular <X>. I have never done anything to promote the page, and do not have any ads on it.

A few years ago, it lost the #1 spot, but is still on the first page.

(I'm specifically not saying what <X> is, or what domain the page is on, because I do not want to do anything that might get people to go there. I want to see how long a simple page with no promotion and no ads can stay on the front page of search).

I just noticed that Google says in the search results 'Your page is not mobile-friendly'. Would that be because I don't have an AMP version of the page?

Clicking on that notice goes to a Google tool that analyzes the page for mobile friendliness...which tells me that the page IS mobile friendly, so I'm a bit confused.

The page is very simple. Just some paragraphs of text, some h1 and h2 headings, a few tables, and a couple of lists. The only CSS is on the page itself via a <style> tag, and only has one rule, setting the width of th to a certain number of px.

wolfgang42(3505) 5 days ago [-]

You can definitely have pages that Google considers 'mobile-friendly' without using AMP. Do you have a viewport meta tag on the page? If you have any CSS at all some mobile browsers will default to a desktop view that requires zooming to read, and that's probably what Google is detecting.

Regarding the analyzer tool, I suspect this is a case of Google's left hand not knowing what the right hand is doing. As I recall, Google is in the process of switching from a static analyzer to a Chrome Lighthouse-based analyzer (or something to that effect) and I'd guess that the mobile-friendly rules are slightly different between the two.
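For reference, the tag in question is a single line of standard HTML in the page's <head> (no AMP involved); without it, many mobile browsers render the page at a desktop width and Google may flag it as not mobile-friendly:

```html
<!-- Tell mobile browsers to fit the layout to the device width
     instead of defaulting to a zoomed-out desktop viewport -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```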

scblock(10000) 5 days ago [-]

I strongly agree. Web developers and app designers should work to build fast, performant web sites that use bandwidth carefully because that's good for end users. But the entire AMP approach to doing this is questionable, and as we have seen over the years it appears to act more like a way to give Google more undeserved and unnecessary control over what should be an open web.

More broadly, I consider this yet another reason to avoid using Google properties where possible. They have shown themselves to be bullies and bad actors who want to control the internet and oppose an open web. I recommend simply not building AMP pages at all, but instead working to build high quality, performant websites which gracefully handle device size changes and lack of javascript.

kypro(4168) 5 days ago [-]

> I recommend simply not building AMP pages at all, but instead working to build high quality, performant websites which gracefully handle device size changes and lack of javascript.

Doesn't matter. Google will penalise against not AMP sites. Let's not pretend there's a choice if you want people to find your content.

throwthisaway2(10000) 5 days ago [-]

Performance has not been a concern for about 10 years. It's all about dev speed. The amount of boilerplate loaded is wasting so much energy across the world. Our libraries are becoming more bloated.

mrb(337) 5 days ago [-]

«the entire AMP approach to doing this is questionable»

Why? AMP is roughly speaking a subset of HTML that's somewhat easier to cache, and nothing more. Ideally it should be possible and encouraged to serve most webpages from a cache, to optimize Internet traffic on the global scale. It should be okay to fetch them from a cache without breaking anything. I don't see why the AMP Cache is hated so much. Publishers shouldn't care whether browsers hit their servers or some third-party cache, as long as they can have proper analytics. And guess what? AMP does provide a way to do proper analytics. You can even send analytics data to an in-house URL: https://amp.dev/documentation/components/amp-analytics/#send... I think most of the hate against AMP in unjustified. Any search engine could decide to cache AMP content.[1] AMP in and of itself doesn't give search engines 'more control' over the web (whatever that means), it just makes the web easier to cache for everyone, all search engines, all end-users.

Edit: [1] not only Google caches it, Bing does it too: https://blogs.bing.com/Webmaster-Blog/September-2018/Introdu...
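As an illustration of that analytics point, a minimal amp-analytics setup posting pageviews to a self-hosted endpoint might look like the following sketch. The endpoint URL is an assumption; ${canonicalUrl} is one of AMP's built-in variable substitutions:

```html
<!-- Requires the amp-analytics extension script in the page <head>:
     <script async custom-element="amp-analytics"
       src="https://cdn.ampproject.org/v0/amp-analytics-0.1.js"></script> -->
<amp-analytics>
  <script type="application/json">
  {
    "requests": {
      "pageview": "https://stats.example.com/collect?page=${canonicalUrl}"
    },
    "triggers": {
      "trackPageview": {
        "on": "visible",
        "request": "pageview"
      }
    }
  }
  </script>
</amp-analytics>
```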

wrycoder(4200) 5 days ago [-]

Yes, it's one thing to promote a cleaner and faster web though better design and implementation. It's another thing for Google to use its effectively monopoly power to enforce that. As the FA says, Google didn't invent the web or create its content - what gives them the moral right to take it over?

I think the collective web will eventually fix the problems without Google.

The root of the AMP issue is placement in Google's search engine. Personally, I use DDG, and would be willing to pay a sizable subscription fee to keep it from being more like Google or from being acquired. But, most people probably would not - they are used to the web being "free".

This is just another "embrace, extend, extinguish" effort, like the ones we have seen in the past. These attacks are transparently self-serving and should be "routed around". It will require commitment to do so!

z3t4(3937) 5 days ago [-]

If AMP somehow manages to sell quality and performance, (whether you use AMP or not), that's mission accomplished!

izacus(10000) 5 days ago [-]

> I strongly agree. Web developers and app designers should work to build fast, performant web sites that use bandwidth carefully because that's good for end users.

They should, but they didn't. Before AMP most of the web was unusable on slower Android phones and frontenders just laughed at you and told you to drop 800$ on an iPhone if you want to see their pages. Is it a surprise that Google shoved a technology to fix web on their platform down developers throats?

Nothing else before AMP helped. Why do you think those developers will suddenly wake up and start building lightweight web pages now? Instead of ad bloated, video playing monstrosities?

Web developers were slothful. This is what purgatory looks like. ;)

alexfromapex(10000) 5 days ago [-]

Agreed, it is yet another Google data collection method created under the guise of a beneficial offering

3xblah(10000) 5 days ago [-]

'18. By Keith Devon on September 7, 2018 at 11:04

If Google only cares about a faster, more semantic web, then why not just give an even bigger ranking boost to faster, more semantic websites? Where does the need for a new standard come in, other than to gain more control?'

The above is a comment found in the OP.

Is there a requirement that AMP sites host resources with Google?

If there is, then Google has hijacked the purported goal of promoting websites that consume fewer client resources (and are therefore faster) -- arguably a worthy cause -- in order to promote the use of Google's own web servers,[1] thereby increasing Google's data gathering potential.

If there is no such requirement, then is it practical for any website to host an AMP-compliant site, without using Google web servers?

If not, then AMP sure looks a lot like an effort to get websites to host more resources on Google web servers and help generate more data for Google.

1. When I use the term 'web servers' in this comment I mean servers that host any resource, e.g., images, scripts, etc., that is linked to from within a web page (and thus automatically accessed by popular graphical web browsers such as Chrome, Safari, Firefox, Edge, etc.)

mrep(3980) 5 days ago [-]

Discussed last year with 340 comments: https://news.ycombinator.com/item?id=17920720

buboard(3489) 5 days ago [-]

another attempt to murder it https://news.ycombinator.com/item?id=20599752

dang(179) 5 days ago [-]

That was Sept 2018.

dang(179) 5 days ago [-]

A related thread from a few months ago:


chiefalchemist(4045) 5 days ago [-]

re: 'Google is also the reason AMP sees any kind of adoption at all. Basically, Google has forced websites – specifically news publishers – to create AMP versions of their articles.'

Actually, that's not quite right. Publishers have forced Google to force publishers to use AMP. That is, publishers can't control themselves and are adding more and more and more bloat to their pages.

I am by no means a Google and/or AMP fan, but the truth is most sites have no respect for the receiving device and/or connection speed.

tyingq(4179) 5 days ago [-]

It's frustrating to me that anyone buys that the driver behind AMP was performance. A Trojan horse has to have some plausible reason to let it in the gates. That plausible reason isn't the actual reason it exists.

If performance really mattered to Google, it would influence SERP and Carousel position in a meaningful way.

oarsinsync(10000) 5 days ago [-]

Neither does Google. They prioritise AMP pages over regular pages, regardless of what the regular page looks like.

EDIT: @afiori comments below that this may no longer be the case.

throwaway13337(4026) 5 days ago [-]

The correct response to bloat is to penalize it in search. If they can't get rid of bloat, better quality sites can rank.

This would likely mean sites with less tracking and ads - two things Google has a vested interest in.

ravenstine(10000) 5 days ago [-]

How many average people are walking around saying 'I've quit the web! It's too slow!'?

Nobody. It's a non-issue because, despite the faults of the web, the average person is still clicking on chum-boxes, sharing clickbait, and using Google services.

Google isn't trying to save the web. They're trying to become the web. And they've already made their money, so what makes you think they care out of the goodness of their hearts?

mqus(4202) 5 days ago [-]

But then Google shows up and demands that all that cruft has to be added back to make AMP and normal pages the same again. Doesn't this kinda defeat the purpose? I thought I can maybe understand AMP pages as some reader mode, fast and no-bullshit content, but now they look like they just want to push their kind-of proprietary tech.

danShumway(4077) 5 days ago [-]

This is a really old article, but as long as we're here: just a quick reminder that the AMP standard still includes platform-specific components that favor individual companies[0] over smaller creators. It's still not clear what will happen to the components when those services disappear[1], and it's still not clear whether Google has the guts to tell someone like Facebook that a new component feature isn't performant enough to be included.

Quick reminder that the only way to do captchas in AMP is to use Google ReCaptcha.

There are a lot of reasons to hate AMP, but one big reason I hope doesn't get drowned out is that it's not just anticompetitive in the sense of handing control of traffic or hosting to Google. It's anticompetitive in the sense of reducing functionality on the web to a handful of large corporations that have every incentive to reduce diversity and place harsher performance restrictions on competitors than they place on themselves.

[0]: https://amp.dev/documentation/components/?format=websites

[1]: https://amp.dev/documentation/components/amp-vine/?format=we...

buboard(3489) 5 days ago [-]

a website owner converting to amp is no longer an owner, it's a gig worker for google.

mcv(4213) 5 days ago [-]

> 'Quick reminder that the only way to do captchas in AMP is to use Google ReCaptcha.'

That is terrible. ReCaptcha is the worst. Also, ReCaptcha seems to discriminate against Firefox, and if AMP discriminates against other captchas, this might actually count as monopolistic abuse by EU rules.

shadowgovt(10000) 5 days ago [-]

That makes sense. Since AMP is intended to be a more optimized site structure than the average website, pages adhering to the AMP standard should increase the performance of the web on average, whether or not the site actually gets indexed into Google's localized caches.

We have HNers complaining about bloated JavaScript pages all the time. Maybe Google has actually found a way to tip the market away from those page designs?

shkkmo(10000) 5 days ago [-]

It would be easy to penalize pages for load time; Google's goal is something different.

ehnto(4211) 5 days ago [-]

If it were just an optional web standard, that might be okay. But it's a system that removes control from webmasters and hands it over to Google, and Google are using their incumbent advantage to coerce you into the AMP system. I can build a site just as fast as AMP without sacrificing the UX of the web itself in the process, and I intend to do so. But there are many businesses jumping on the AMP bandwagon because they think there will be an SEO benefit.

As a user, I hate clicking on AMP links by mistake. It's not even clear where you are, is it a website, is it still Google, how do I get to the website from here? Who even served me the content? It says it's from this website, but it wasn't?

rjmunro(4110) 5 days ago [-]

Have you ever visited a news site without AMP enabled? It's literally impossible to use. Popups flying all over the place, unwanted advertising videos loading, megabytes of tracking JS loaded etc. AMP is forcing publishers to make sites people can actually use.

buboard(3489) 5 days ago [-]

Can you name such a site that is impossible to use? I'm quite sure it sees so few visitors that it doesn't care about disappointing them. And if they're bloated as hell - why would they care about AMP anyway?

Doctor_Fegg(3372) 5 days ago [-]

Yes? The Guardian, all the time? And... it's fine?

JohnFen(10000) 5 days ago [-]

> Have you ever visited a news site without AMP enabled?

Yes, but I don't have any of those issues. A decent web browser fixes those problems.

nwallin(10000) 5 days ago [-]

Use an ad blocker. Boom, problem solved.

dredmorbius(199) 5 days ago [-]

My alternative is generally to load the site in https://outline.com

As with the former Readability, or Pocket, this offers a simplified page view.

On Firefox or Safari, use Reader Mode.

(Ironically, Chrome has a Reader mode which is 1) automatically enabled, 2) not disableable, and 3) uniformly styled horribly.)

I believe there are now browsers and/or extensions which enable Reader View by default, or at least on specified domains/sites.

I've got a modicum of sympathy for Google here. Yes, the Web is a problem, and HTML's fast-and-loose attitude 'be generous in what you accept, conservative in what you emit' has turned out to be a long-term liability.

Allowing Google and Google alone to play both sides of the deal in specifying and benefitting from the standards is a flagrantly glaring conflict of interest (and very likely an antitrust violation). But the underlying stated concerns are real. And absent some entity with the ability to tell website publishers 'no, your cavalier so-called HTML Does Not Play Here', the descent into further levels of markup and Javascript hell will continue.

I've found the most useful process for re-rendering sane HTML from most websites is to dump to plain text first (w3m or lynx, if they'll handle the site, copy/paste if not), and then re-add whatever minimal markup is actually required (usually via Markdown), then generate clean HTML.

Actual content payload is often only a few single-digit percent, and sometimes far less, of the page's markup. And that's excluding additional asset loads (CSS, JS), let alone image and media files.
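That content-to-markup ratio is easy to measure yourself. The following is a rough stdlib-Python sketch (the sample page and the script/style skip list are mine, not from the comment): it counts visible text against total page bytes.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only visible text, skipping <script> and <style> bodies."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.chunks.append(data)

def payload_ratio(html: str) -> float:
    """Fraction of the raw page that is actual visible text."""
    parser = TextExtractor()
    parser.feed(html)
    text = "".join(parser.chunks).strip()
    return len(text) / len(html)

# A toy page: 5 bytes of content ("Hello") wrapped in ~100 bytes of markup.
page = ("<html><head><style>body{color:red}</style></head>"
        "<body><script>var x=1;</script><p>Hello</p></body></html>")
print(payload_ratio(page))
```

Even this toy page comes in under 10% text by bytes, before any external CSS, JS, or images are counted - consistent with the single-digit percentages described above.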

Again: until there's a cost to publishers for pulling this crap, we're going to see more of it.

laurent123456(1242) 5 days ago [-]

I use duckduckgo with Firefox so I never see any AMP website and it works fine. Firefox allows you to have an ad-blocker though so it makes a difference.

It's all about choice, but if you want to use a Google search engine with a Google browser then indeed you probably have to browse the web the Google way.

kristopolous(3393) 5 days ago [-]

Yes. A news site, any news site, around 2014 would just smash my phone, making it completely unusable.

The page load would just be completely unpredictable. You start reading something and then it would fly down 2 pages and then up half a page because some asset container would load but then change its idea of how much space it would need. Then you'd touch to scroll to continue reading and it would register as a tap and open up the ad.

Then the js hell would make the phone unusable, it felt like the people that wrote late 90s windows malware went into the business of making local news websites. it was complete and utter trash.

ninkendo(3450) 5 days ago [-]

Sure, I visit non-AMP sites all the time, and here's what happens:

Reader mode comes up and shuts everything else down. I have it on by default, it's great. It blows AMP the hell out of the water and it's the only thing that makes the 'unwashed' web (ie. anything that matches common search terms) tolerable.

bil7(10000) 5 days ago [-]

In the last few years my opinion of Google has gotten worse and worse. I dread to think of what it will look like in 5 or 10 years time.

brink(10000) 5 days ago [-]

There's hope. Microsoft somewhat turned it around after a change in directors.

privateSFacct(10000) 5 days ago [-]

I love these posts - this has been talked to death.

The last thread had some good illustrations comparing media sites non-AMP pages (the bloat from ads / javascript / etc was INCREDIBLE) to AMP pages.

Google puts a little icon next to amp pages at least some of the time. These pages usually load VERY quickly in my experience - somehow whatever AMP/Google is doing results in less bloated pages on these AMP pages.

I wouldn't be surprised if users start naturally gravitating to these pages for the better experience. I know I have sometimes just because I know the page is not going to trap me on their site if they are AMP. I can usually get back to search results with AMP, where other sites do a weird thing where they pop up a registration page in front, then even if you fight through that you have to fight through some registration redirects to get back.

I wish Google would push down news sites I don't have memberships to, though - banging on paywalls is annoying - I pay for a few sites already; it'd be great to have those be the ones surfaced most often.

buboard(3489) 5 days ago [-]

> I wouldn't be surprised if users start naturally gravitating to these pages for the better experience.

If this were true at all, these sites would have lost their mobile traffic. At worst, google could downrank them. This whole AMP thing is a glaringly terrible idea and such an incredible arm-twisting that should be scaring developers away from google.

It's not difficult to see AMP for what it is: an evil attempt to turn websites into 'web snippet producers' that can only be monetized via Google or die. It's web feudalism.

RonanTheGrey(3738) 5 days ago [-]

> somehow whatever AMP/Google is doing results in less bloated pages on these AMP pages.

The article links to another page that dives into this: AMP isn't actually creating less bloated pages.

The speed increase is because Google Search preloads the content for all AMP results so you're effectively getting a cached page when you click on it.

That is incredibly monopolistic behavior because it can't be reproduced any other way.

adrianmonk(10000) 5 days ago [-]

This article focuses on what it's like for web developers and for the web ecosystem, which are both important issues. But AMP is also really annoying for end users.

As an end user, AMP gets in my way and complicates my experience. There's extra work to figure out what's going on. This page is from whatever site but 'delivered by Google'. As an end user, my reaction is basically: what the hell does that mean, why is it here wasting my time and cluttering up my screen, and when can Google cut it out?

Then sometimes I go to share a link with a friend over Slack or whatever, I hit the share button, and the URL comes out all fucked up. I know they're going to look at the URL to figure out what it's about (because in the real world, people do look at URLs), so I feel compelled to fix it, so I have to back up out of there, then dig around in the UI to figure out how to get a real URL. Maybe 'open in chrome' will do it, or maybe I need to flip through the page itself to find where it gives a link to itself. I can never remember what works, and I don't want to have to.

I know AMP pages are supposed to load faster, and they probably do a little, but I would gladly trade that for simplicity.

Also, I would turn it off if they would give me the option, which wouldn't be hard but they don't, which tells me they don't want people turning it off.

IshKebab(10000) 5 days ago [-]

Yeah, but on the plus side, it loads in a second, isn't sluggish to use, and isn't full of annoying fixed elements. Most non-AMP news websites take many seconds to load and are really slow and annoying once loaded.

I get all the arguments against AMP, but 'annoying for users' surely isn't one of them.

mcv(4213) 5 days ago [-]

If as a user you don't want AMP, just don't use Chrome and Google Search. But for a website to miss out on the traffic from Google Search is a really steep price. We need everybody to switch to a different search engine.

dessant(3373) 5 days ago [-]

EU citizens can submit formal complaints to the European Commission for suspected infringements of competition rules.

Here is more information on how to file a complaint: https://ec.europa.eu/competition/contacts/electronic_documen...

If you believe Google engages in anti-competitive practices with AMP, you have the power to signal these issues, which may result in an investigation.

You can also share your concerns with a simple email to [email protected]

> You can report your concerns by e-mail to [email protected] Please indicate your name and address, identify the firms and products concerned and describe the practice you have observed. This will help the Commission to detect problems in the market and be the starting point for an investigation.

DrJaws(4155) 5 days ago [-]

I sent a complaint about AMP to Margrethe Vestager over 2 years ago, when this was relevant


and they did nothing; I doubt they even studied the case. Nothing related to this was on the table of the parliament. Google has continued to abuse and will do more if no one stops them. It's important to complain again and again until they step in.

williamDafoe(10000) 5 days ago [-]

Websites WERE building horrible, non-mobile news articles in HTML when AMP started at Google in 2015. The news articles were so slow and wasted so much bandwidth that many news orgs wrote bad apps (think CNN app; BBC app) to replace shit with even worse shit. That's what you get when you skimp on frontend engineers!!!

AMP gives little guys, the ones starting blogs and trying to grow, a shot at freedom of speech. The web was developing in a way that the big players like BBC and CNN would dominate with big budget winner-take-all walled gardens. AMP is one of Google's most anti establishment services, which means I'm sure Ruth will be killing it very soon!

This meant Google search on the mobile web was literally dying. Every year more and more content was being locked inside walled gardens!! I was a maintainer of AMPHTML 2015-2018 at Google. The project is hibernating and loses a ton of money; I know, I worked on the budgets for flash memory for AMP. At the time Facebook and others were proposing proprietary non-HTML news document formats. Google, to keep HTML alive, decided to cache AMP for free, which subsidized hosting costs for ALL news websites. I hate it that now I have to switch browsers 2x to write an article comment, too! But news apps NEVER supported this AT ALL!! News apps NEVER supported a working search feature AT ALL!! News apps NEVER supported a good user experience or global search AT ALL!

If you want to rant, blame the bloatware mess that is HTML, it has almost killed the mobile web, not AMP! AMP is Google's attempt to keep HTML alive on phones ...

JohnFen(10000) 5 days ago [-]

> AMP gives little guys, the ones starting blogs and trying to grow, a shot at freedom of speech

How so?

> AMP is one of Google's most anti establishment services

It looks like the exact opposite of that to me. This is Google's attempt at remaking the web in a way that enhances Google's control and power. That's pretty pro-establishment.

Eric_WVGG(4187) 5 days ago [-]

If Google had an option for logged-in users to bypass AMP pages, I would not blame Google. They stubbornly refuse to do this, thus it is Google that is ruining mobile browsing for me.

(I would have written an iOS Safari extension that bypasses AMP years ago if Apple supported such a thing...)

katzgrau(4146) 5 days ago [-]

> AMP gives little guys, the ones starting blogs and trying to grow, a shot at freedom of speech.

As long as the big guys aren't on AMP yet. But an overlooked tradeoff is that the little guys are forced to play by Google's rules in terms of how and where they display ads, even the ones that aren't sourced by Google's ad network. It creates a completely uniform policy that undeniably benefits the scale of Google. A small publisher simply cannot differentiate their ad offerings. If you view that as a good thing for the end user, that's fine, but it's certainly not in favor of the little guy. Little guys depend on differentiation in every area of their business to effectively carve out a niche against a giant like Google's ad network.

mysterydip(10000) 5 days ago [-]

Wouldn't ranking results by size of page have pushed sites towards more mobile friendly lightweight pages?

coleifer(3068) 5 days ago [-]

Nice card-stacking.

Seriously, nothing is going to kill the mobile web more than Google continuing to overreach and use bait-and-switch tactics on publishers. Oh, sure, AMP is good for the 'google-mobile-web experience', but bad for an open web.

tomcooks(10000) 5 days ago [-]

> AMP gives little guys, the ones starting blogs and trying to grow, a shot at freedom of speech.

> AMP is one of Google's most anti establishment services

You're either writing satire I don't get, or work for Google.

How exactly does a walled garden give you free speech? Especially when it's provided by who profit the most from you not leaving said garden? While also forcing you to bypass standard practices?

Utter nonsense, unless it's a joke I'm not getting.

wil421(4074) 5 days ago [-]

I think you are dead wrong. 2015 didn't mark some ah-ha moment when AMP came along and we were finally able to use the web on mobile. Most of the websites that did and still do have problems are auto-playing video news sites or sites with way more ads than necessary.

AMP is just a step above the top results boxes Google puts on the results page that are scraped from other websites. See the other front page article about Google repeatedly stealing Genius lyrics.

Google shouldn't become the new AOL.

pembrook(10000) 5 days ago [-]

AMP absolutely does not give the little guy a leg up.

In fact, it's only the massive news sites that have the developer time to support AMP, meanwhile the little guy has to play around with terrible Wordpress plugins and spend hours fiddling with it just so Google will properly crawl their site.

And don't even get me started on static site generators. AMP support is shoddy at best and a giant PITA for 99% of static site generators. Wordpress is one of the main reasons the web is so slow, yet AMP gives power to Wordpress since it's the only way non-technical blog owners can support AMP.

AMP forces small time blogs and content sites to waste time building two versions of their website to rank alongside the big boys. How does this help the little guy?

buboard(3489) 5 days ago [-]

Seriously, is HTML performance a real issue? Mobile traffic keeps growing and growing and growing, according to Google, who now crawls most sites mobile-first! Phones have 4 cores and download 300-MB games daily. There is absolutely no need for this abomination. If it cared, Google could threaten to derank slow sites for slow phones and the average website size would be cut in half within a week!

> AMP gives little guys, the ones starting blogs and trying to grow, a shot at freedom of speech.

Whoosh, I now realize you're joking

dillonmckay(10000) 5 days ago [-]


It was the third party ad networks that caused performance issues on the news sites, as well as distributing malware.

There were a lot of Flash ads for a while, as well.

Definitely an issue prior to 2015.

gtirloni(2272) 5 days ago [-]

> If you want to rant, blame the bloatware mess that is HTML, it has almost killed the mobile web, not AMP

The author of this article is pretty much praising the bloatware mess that we have and wants more. I'm also puzzled.

To an end user, this article just gave the best highlights about AMP.

qxnqd(10000) 5 days ago [-]

Instead of blaming Google for creating and pushing AMP, how about blaming publishers for forcing Google to create AMP?

Lev1a(10000) 5 days ago [-]

That's some cheap bait you have there. Kindly go away.

bluetidepro(3415) 5 days ago [-]

AMP Is ruining mobile web. I cannot stand it. If it was actually made to be fluid, I'd see the value. But it's such a terrible UX, and so janky with the way it 'pops in', and messes up 'browser back' abilities. Out of all the shitty things Google has ever done, AMP is #1 to me on that list.

Orrrrrr just give me a damn option to turn it off, if I want. I will never understand why companies force people into these types of major UX decisions on their behalf. Stop assuming every user is stupid. Sure, make it the default, I don't care about that for the everyday user, but for something as fundamental as the browser, I should have an option to turn off every single Google opinion they bake in.

mcjiggerlog(3940) 5 days ago [-]

If you use DuckDuckGo you don't have to deal with AMP at all. Giving it a go on mobile is a good way to see how well it works for you too as searches tend to be less mission-critical compared to desktop-based searches.

wait_a_minute(10000) 5 days ago [-]

It's also really annoying when you want to copy a link to send to people, but it copies the amp link instead.

john-radio(10000) 5 days ago [-]

What is an example of a web site I might have used that breaks the Back button because it uses AMP?

Kique(10000) 5 days ago [-]

I love AMP sites that do it the right way, like Politico. Keeps the real domain, loads fast, clean interface. I wish more sites were like this. I think the first version of AMP where the URL was always 'google.com/amp/politico/sdgffsdf' was awful but you can now keep the correct domain and I sometimes prefer it to the regular version of a lot of sites.


JohnFen(10000) 5 days ago [-]

> I think the first version of AMP where the URL was always 'google.com/amp/politico/sdgffsdf' was awful

But that has the advantage of making it easier to find the real page rather than the AMP page.

tyingq(4179) 5 days ago [-]

It's nicer than the original AMP setup, but still awful for publishers.

For any user that navigates to your AMP page from a Google search...

The publisher gives up the most important piece of screen real estate, and Google hijacks left/right swipes to navigate to your competitors. And they hijack the back button post-swipe too... back equals 'back to Google', not back to the page I swiped from.

It is pretty much like early AOL. A semi walled garden. It offers some speed benefit for users, but way more benefit to Google.

buboard(3489) 5 days ago [-]

is this served from politico's servers and how is it different from a stripped down version of their site?

taf2(3748) 5 days ago [-]

Sorry, isn't this just another web developer complaining that they have to use someone else's technology? I mean, the first time I looked into supporting AMP I had that same visceral reaction: 'f this, it sucks'. After spending some time with it, it's not that bad and I get the value prop - AMP is content being served up directly on google.com or Cloudflare or any CDN provider; it's essentially the evolution of Google's cache. Pretty cool if you stop to think about it. In the past, to access Google's cache you got a pretty fast page response, but it was also kind of broken because not all of the essentials would load correctly. Now with AMP you have some control over that cached page that is offloaded for free to Google's servers. I think there are some UX issues for sure - IMO it's open source so you can always open a pull request/issue and try to make it better... or ignore it and continue to build something great... if you have something great people will find it whether you use AMP or not.

phpnode(2558) 5 days ago [-]

It doesn't matter how good or bad the technology is, or whether it's convenient or profitable to use it. AMP is a brazen attempt by Google to use their monopoly to take control of the web. The current version of AMP is not their intended final destination.

KirinDave(641) 5 days ago [-]

I had been told (and I have 0 special knowledge here, this is just what a consultant in this space explained to me a few years ago) that AMP boosted your placement specifically because latency was a scored and important factor.

As such, all you needed to do to get similar rankings was use any sort of CDN hosting for your page and you would get similar results to using AMP.

Also, it sorta seems to me like the author is complaining, 'I can't just do a minimum effort AMP page for the search juice, I actually have to make a functional AMP offering or not use AMP at all.' Strictly as a consumer, I feel like maybe Google is doing me a favor while telling off a publisher.

pembrook(10000) 5 days ago [-]

Not true. Google favors AMP content in "articles for you" on chrome mobile, as well as featured story carousels on search and inside Google discover. All of these areas can be a massive firehose of traffic.

If you make a living from a content site, you have to play ball and create AMP versions of all your pages.

OR, you can choose to lose to your competitors. Let's stop pretending like that's really a choice, or that any sizable share of users will ever switch to DuckDuckGo.

This is where we need government to step in and regulate Google's de facto monopoly on search.

bensochar(10000) 5 days ago [-]

A CDN helps with the page-load/latency variable of Google's PageRank but won't equal AMP.

To get AMP-like speed you'd need: a CDN, no render-blocking JavaScript, minimized image files, 'lazy-loaded' assets & inlined CSS for 'above the fold' content. On the server side you want to cache content with something like Varnish & send it over an 'edge' network like Akamai or Fastly. Ideally everything is served over HTTP/2 or SPDY.

Doing all that replicates what AMP does.
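As an illustration of one item on that checklist, here is a toy render-blocking-script check in stdlib Python. The class name, heuristic, and sample file names are mine, not from the comment; real audits use tools like Lighthouse.

```python
from html.parser import HTMLParser

class RenderBlockCheck(HTMLParser):
    """Toy heuristic: flag external <script> tags inside <head> that lack
    async/defer, since those block HTML parsing and delay first paint."""
    def __init__(self):
        super().__init__()
        self.in_head = False
        self.blocking = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)  # boolean attributes like `defer` map to None
        if tag == "head":
            self.in_head = True
        elif tag == "script" and self.in_head and "src" in a:
            if "async" not in a and "defer" not in a:
                self.blocking.append(a["src"])

    def handle_endtag(self, tag):
        if tag == "head":
            self.in_head = False

page = """<html><head>
<script src="/analytics.js"></script>
<script src="/app.js" defer></script>
</head><body><p>story</p></body></html>"""

checker = RenderBlockCheck()
checker.feed(page)
print(checker.blocking)  # only the script without async/defer is flagged
```

The point of the sketch: none of these optimizations require AMP markup, just discipline about where scripts and styles go.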

lern_too_spel(4220) 5 days ago [-]

The first part of your post is incorrect: AMP is instant because it is prerendered, which just using a CDN can't achieve.

The second part is correct. I hate Reddit AMP results, and I'm happy that Google is telling them to fix it. I'll be even happier when they and other search engines demote Reddit AMP pages that do not match the canonical pages.

vxNsr(2963) 5 days ago [-]

I'm really not looking forward to the google web...

processing(2980) 5 days ago [-]

Where every 'website' is an app in the Google app store. Google will serve ads automatically as a freemium app (no code required - just agree) or be able to process subscription fees via Google Pay, and own the ecosystems (Chrome, Android, Chromebook, Stadia).

rjmunro(4110) 5 days ago [-]

Real people aren't moving to Firefox or Brave, they are moving (or have moved) to the Facebook mobile app. It has instant articles, (https://instantarticles.fb.com/) an amp-like lightweight way to load news stories, all hidden inside Facebook's system. Don't worry about searching for news, Facebook will just give you whatever it feels like.

Then there's Apple's iOS-only 'News' app. Who knows how that works.

AMP is clearly a far more open competitor to the above. It is saving news publishers from themselves.

appleflaxen(3623) 5 days ago [-]

I don't think the rewritten title is faithful to the link.

I understand why the original title of 'Google AMP Can Go To Hell' was rewritten, as it is unnecessarily inflammatory and clickbaity, but

'Google wants websites to adopt AMP as the default for building webpages'

really isn't the point of the article, at all. It is necessary background for the article, but the point of the article as I read it is that there are specific implications of the AMP effort that are nefarious, insidious, and dangerous to the open web. A rough outline would be

* Background: Google wants websites to adopt AMP as the default for building webpages

* Contention 1: Adopting AMP is technically difficult

* Contention 2: Adopting AMP will harm non-google web properties in the future

* Call to action: there are several things you can do in order to '...fight back. You could tell them to stuff it, and find ways to undermine their dominance. Use a different search engine, and convince your friends and family to do the same. Write to your elected officials and ask them to investigate Google's monopoly. Stop using the Chrome browser. Ditch your Android phone. Turn off Google's tracking of your every move. And, for goodness sake, disable AMP on your website.'

I don't think any reader would think the Background statement provides the best summary of the post's thesis; it's clearly the call to action. By changing it to the background statement, it changes the 'meaning' of the article completely.

So 'Google AMP Can Go To Hell' is much better than the revised title, and if it's unacceptable, then 'Fight back against Google AMP' is both faithful and uses the author's own terminology.

I think in this instance that the editorial control exerted over the post title could have been better considered (notwithstanding the same change made in the prior submission; the same mistakes were made then, as well)

dang(179) 5 days ago [-]

The main thing we're trying for in title changes is to use representative language from the article itself. Since it does use the phrase 'fight back', we can go with your suggestion.

Joe-Z(10000) 5 days ago [-]

Nice one HN, by changing the title (to something that is not the title of the actual submission!) you just inverted the meaning of the first sentence of the top comment at the time of writing [0]

If you are going with the second headline why not include the 'Tell them no.'-part too?

[0] https://news.ycombinator.com/item?id=21703750

dang(179) 5 days ago [-]

Because HN has an 80 char limit on titles.

Edit: we've since changed the title from 'Google wants websites to adopt AMP as the default for building webpages (2018)'. See https://news.ycombinator.com/item?id=21705610.

Historical Discussions: The latest version of Slack has a setting to disable the WYSIWYG editor (December 03, 2019: 780 points)

(780) The latest version of Slack has a setting to disable the WYSIWYG editor

780 points 6 days ago by agwa in 2998th position

twitter.com | Estimated reading time – 1 minute | comments | anchor

(Article extraction failed: the page returned Twitter's logged-out onboarding boilerplate instead of the tweet.)

All Comments: [-] | anchor

asmint3(10000) 6 days ago [-]

Don't understand the need for a WYSIWYG editor in the first place. It's not a word processor. The need to add formatting to a Slack message should be rare enough that the WYSIWYG overhead just isn't worth it.

gempir(4178) 5 days ago [-]

They want to make the product appeal to all users and not just techy ones. And if a product manager or CEO or whoever can make his message bold, he's happy.

Very few non-techy people know what markdown even is. (From my own experience asking friends etc.)

tempsy(2663) 6 days ago [-]

Amazing to see how Slack turned from something everyone raved about to something everyone is sort of annoyed with and critical of. Maybe that isn't that surprising for most products as they age/mature but it seems particularly acute with Slack.

anonytrary(4070) 5 days ago [-]

Slack is nothing compared to JIRA in that regard. JIRA might be the worst modern software I've ever been forced to use. I typically have no problems working with Slack -- everything just sort of works as I expect it to 95% of the time. The fewer questions I have to ask, the better the design.

agapon(10000) 6 days ago [-]

People and companies alike keep forgetting that the best is the enemy of the good. They achieve success with the good and they try to get ahead of themselves with the better. Sometimes they succeed :-)

Someone1234(4213) 6 days ago [-]

"There are only two kinds of languages: the ones people complain about and the ones nobody uses." -- Bjarne Stroustrup

I've found this quote extremely true, not just with programming languages but with all things. People complain about Slack because they use Slack a lot, so minor quibbles become a daily frustration.

It is when people go quiet about a product that you should really worry. They're either extremely happy or aren't using it; it might be critical to find out which.

ses4j(4189) 6 days ago [-]

It was better than everything that came before it for its purpose, so people were happy with the improvement.

But then every time they change something, unless it is just 100% obviously better for everyone, like a speedup, it's gonna annoy some people.

t34543(10000) 5 days ago [-]

Slack used to care about user choice, such as providing an IRC gateway. All of those customizable features have been walked back in favor of an "it's not us, it's you" mentality.

lilyball(10000) 6 days ago [-]

Looks like this doesn't bring back the markup help text in the lower-right corner, which is a shame.

I really wish they'd just highlight your markup in the appropriate style while leaving the markup intact. That would solve the 'I want to see what the markup I just typed will do' without any of the problems of the WYSIWYG approach. Basically just syntax highlighting for markup.

mtremsal(10000) 5 days ago [-]

This is pretty much what Bear does. It's great.

danShumway(4077) 6 days ago [-]

This is a good reminder that sometimes complaining about companies on social media like Hackernews and Twitter does actually help. Not always, maybe not even usually. But Slack went from, 'no, we have no plans to adjust this', to 'yes, we'll include a toggle', and the reason was that a bunch of people publicly complained and wrote a bunch of messages to support.

It is a very, very, very small victory, and if I was going to choose a victory, Slack's Markdown editor would not be very high on my list of priorities -- but I'll take it.

Jamwinner(10000) 6 days ago [-]

Isn't that just a silent admission that modern tech support is woefully broken, more than a 'Twitter is good' message?

umanwizard(4025) 5 days ago [-]

I emailed them to tell them I would stop using slack completely and start emailing my coworkers instead if they didn't revert it. I assume hundreds of others sent similar mails.

jimueller(10000) 5 days ago [-]

How do you know it was social media? I contacted Slack's customer support and asked for a toggle. It could have been my request. The customer support chat was fast, friendly, and professional. I couldn't ask for more.

asmosoinio(3164) 5 days ago [-]

I think they also got a lot of feedback directly through /feedback in Slack and support emails.

So they were not necessarily reacting ONLY to public outcry - they have direct feedback from paying customers as well.

soneca(1515) 6 days ago [-]

It is interesting to me that the rant post about Slack's WYSIWYG editor was the 18th most upvoted HN post of all time.

No idea what conclusion I should draw from it, if any at all, but it's surprising to me at least.

criddell(4162) 5 days ago [-]

I'd love to know in a year what percentage of users disable the WYSIWYG editor.

It blows me away that there are almost 2000 people working at Slack. The product feels like it is done and they should be able to downsize to a skeleton crew and slip into maintenance mode.

jmcqk6(4108) 6 days ago [-]

I don't think there is anything strange about that. Slack is a widely used tool, which introduced a feature that made it very difficult and frustrating to use.

trynewideas(4215) 6 days ago [-]

lots of hn readers use slack

lots of hn slack users disliked the feature

lots of hn slack users who disliked the feature went straight to hn to vote up any post complaining about it

that's how communities around tech broadly tend to work

ajkjk(10000) 6 days ago [-]

I know what I think it is: there's a great deal of latent frustration among the rank and file of this industry, that the people running the show are basically greedy and/or incompetent morons.

For people who are in it for money or status or feeling successful it's not a big deal. For the people who care about products, tools, elegance, and candor, it's absolutely maddening.

It especially doesn't help for a company to spew fake bullshit when it pointlessly changes something. What would it take to get a little candor? Why do all the corporations end up being so candorless? Why does working in this field have to be so depressing?

mdszy(10000) 6 days ago [-]

'I apologize for the disruption to your existing workflows. Our aim is to build an editor that works for all Slack users to better format their messages and clearly communicate in channels, regardless of their technical expertise. While we are taking all feedback on board, disabling the new formatting tool isn't an option that we will be offering.'

ajnin(4067) 6 days ago [-]

> an editor that works for all Slack users

That's a common pitfall many companies, including Google, seem to be falling into. Everyone uses tools a bit differently. Imagine that each feature is an axis in the space of all the ways to use the tool. Every person will be a different point somewhere in that space. If you try to enforce only one way to use a feature, you carve a slice out of the useful space of your tool. And the more features you have, the smaller the space defined by the intersection of all those 'works for all' features, and the fewer users it actually helps!

The 'one tool that works for everyone' simply does not exist. Thankfully they realized that this time, but it's more often not the case.

geoffreyhale(10000) 6 days ago [-]


gropius(10000) 6 days ago [-]

> regardless of their technical expertise

Good luck with that. Trying to make one product powerful enough for the techies yet simple enough for the normies is the software equivalent of Napoleon attacking Russia.

ArmandGrillet(2448) 6 days ago [-]

Dark mode, better performances, option to disable divisive new features: Slack might be making it harder to focus at work but the product itself improved quite a lot in 2019, kudos to the PMs.

beshrkayali(2219) 5 days ago [-]

Incompetence deserves no praise when the mistake gets fixed.

encoderer(3846) 6 days ago [-]

No PM anywhere gets credit for Dark Mode adoption in 2019.

The other 2 amount to solving self-inflicted problems.

You are kind.

gota(4124) 6 days ago [-]

Side comment: I've tried to use Slack's dark mode but had to quit. I was constantly having to download images to be able to read the black-on-transparent matplotlib images (that's apparently the default for the axes, markings, legends, etc.)

It's not a matter of changing my code, too, because I want to see the images posted by _other_ people

yellow_lead(10000) 6 days ago [-]

I'm waiting on the better performance :)

thenewnewguy(10000) 6 days ago [-]

> better performances

I don't mean to bash slack too hard, but it honestly feels like the performance has gotten worse over time for me. Certain actions like Up Arrow (edit previous message) and loading threads on the 'Threads' page seem to take way longer than they used to.

binaryblitz(10000) 6 days ago [-]

Kudos to the PMs?

They allowed a half-baked (if that) feature to be shipped out and then wouldn't back down from it for a week.

This reeks of PMs pushing a feature that isn't ready yet.

fortytw2(3883) 6 days ago [-]

It's pretty incredible to think that for all the billions of dollars around Slack it's still fundamentally the same product with the same value add that it was when it first launched.

Sure, maybe it's faster or more efficient or whatever now, but what it is at heart is the _exact_ same thing it was on Day 1.

72deluxe(10000) 5 days ago [-]

And still just a web browser underneath!

If they used a native toolkit this bug wouldn't exist, no?

gsich(10000) 6 days ago [-]


peterwwillis(2589) 6 days ago [-]

A Tesla Model S is fundamentally the same product with the same value add as a Ford Model T. Sure, the Tesla may be faster or more efficient or whatever now, but what it is at heart is the exact same thing as the Ford was 111 years ago.

colechristensen(10000) 6 days ago [-]

Does it need to change?

I don't want Slack to be a different or more comprehensive tool.

ianstormtaylor(1380) 6 days ago [-]

In case anyone's interested, here's the Chromium bug(s) that's one of the main reasons the WYSIWYG editor has such issues in Chrome(/Electron):

- https://bugs.chromium.org/p/chromium/issues/detail?id=102937...

- https://bugs.chromium.org/p/chromium/issues/detail?id=608393

- https://bugs.chromium.org/p/chromium/issues/detail?id=608162

The gist of it is that they 'canonicalize' the DOM selection when you have two adjacent inline elements, so that it can only ever be at the ending edge of the first inline element. Which makes certain selection states impossible. And on top of that, they don't draw the cursor in the right spot when doing the canonicalization. So for instance, it might look like your cursor is here:

But it's actually here:

Drawn in the wrong place. And impossible to put it at the beginning of 'two' at all! It's crazy!

I don't know how much they take 'starring' into account, but it can't hurt to star them in the hopes they're fixed sooner.


Edit: If you want to see the craziness in action, check out this sandbox:


1. Try to put your cursor at the end of the word 'one'

2. Try to type a character at the start of the word 'two'
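The selection behavior described in this comment can be sketched as a toy model in Python (an illustration of the reported canonicalization, not Chromium's actual code):

```python
# Toy model of the caret "canonicalization" described above: with two
# adjacent inline runs, a caret at the start of the second run is
# normalized to the end of the first run, so the start of the second
# run becomes an unreachable caret position.

def canonicalize(runs, run_index, offset):
    """Normalize a caret position (run_index, offset) the way the
    comment describes: the start of run N>0 collapses to the end of
    run N-1."""
    if run_index > 0 and offset == 0:
        prev = run_index - 1
        return prev, len(runs[prev])
    return run_index, offset

runs = ["one", "two"]  # two adjacent inline elements

# Trying to place the caret at the start of "two"...
print(canonicalize(runs, 1, 0))  # (0, 3): end of "one" instead
# ...while the end of "one" maps to itself.
print(canonicalize(runs, 0, 3))  # (0, 3)
```

In this model the positions (0, 3) and (1, 0) are visually distinct but collapse to one, which is why the cursor can be drawn somewhere the editor cannot actually type.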

epistasis(3798) 6 days ago [-]

It's mystifying, absolutely mystifying, to me that Slack shipped that editor when both electron and chrome show such obvious problems.

What sort of out of control process let this get out, and also let them ignore bug reports for so long?

This significantly lessens my opinion of Slack and their internal culture. In prior threads there were rumors that Slack employees were hesitant to give negative feedback internally, due to culture. I don't put much stock in rumors, but I'll be paying attention now to that possibility.

jrwr(10000) 5 days ago [-]

Wow, Firefox handles it just fine, Chrome just flat out breaks. That's an interesting bug!

tasuki(10000) 6 days ago [-]

Not sure Chromium is to blame here. All the WYSIWYGs in all the environments I've ever seen I found fiddly and unpleasant to use.

Wowfunhappy(10000) 5 days ago [-]

My request for every rich text WYSIWYG editor is as follows: please never change the text style of my cursor unless I explicitly say so. Regardless of the cursor's location.

Toggling bold should function identically to toggling caps lock: if I press the bold button, any letters I type should be bold until I press the bold button again. Caps lock does not disable itself when I click into a non-all-caps block of text, and neither should bold.

This would be annoying sometimes, but it would help far more often than it would hurt. For some reason, however, nobody does it this way.
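The proposed behavior can be sketched with a toy editor model (hypothetical names; a sketch of the suggestion above, not any real editor's implementation):

```python
# Sketch of "bold should work like caps lock": the bold flag is a mode
# the user toggles explicitly, and moving the cursor never changes it.

class ToyEditor:
    def __init__(self):
        self.bold = False
        self.chars = []  # list of (char, is_bold)

    def toggle_bold(self):
        # The ONLY thing that changes the mode, like the caps lock key.
        self.bold = not self.bold

    def move_cursor_into(self, is_bold_run):
        # Deliberately a no-op for the mode: clicking into non-bold text
        # does not clear bold, just as clicking into lowercase text does
        # not clear caps lock.
        pass

    def type(self, ch):
        self.chars.append((ch, self.bold))

ed = ToyEditor()
ed.toggle_bold()
ed.type("a")
ed.move_cursor_into(is_bold_run=False)
ed.type("b")     # still bold: the mode survived the cursor move
print(ed.chars)  # [('a', True), ('b', True)]
```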

umanwizard(4025) 5 days ago [-]

I really hope they don't think they can fix a few "bugs" and make this thing usable.

I've noticed this in the past, when a previous employer got rid of their Markdown interface for interview candidate feedback. In response to everyone's angry comments about their flaky piece of garbage WYSIWYG replacement, they responded "please file bugs if our Markdown-like shortcuts aren't working right!"

They were completely missing the point since even if their tool was 100% bug free (which it wasn't, and I'm sure still isn't), WYSIWYG editing will NEVER feel like text editing, and I NEVER want it in a critical work tool.

Honestly people thinking their WYSIWYG editor "supports markdown" because "Star Star" is an alternate shortcut for Ctrl+B seems to be so severely missing the mark that I wonder if they're deliberately insulting my intelligence.

marknadal(2846) 6 days ago [-]

Um, no, this is not impossible; we editor folks have been inserting a zero-width space after the tag and selecting it for years.

A team the size of Slack's should/could have easily worked around that.

Vinnl(525) 5 days ago [-]

It's been a while since I read this, but IIRC this article describes that bugs like these are exactly the reason why CodeMirror doesn't rely on contentEditable: https://codemirror.net/doc/internals.html

> CodeMirror 1 was heavily reliant on designMode or contentEditable (depending on the browser). Neither of these are well specified (HTML5 tries to specify their basics), and, more importantly, they tend to be one of the more obscure and buggy areas of browser functionality—CodeMirror, by using this functionality in a non-typical way, was constantly running up against browser bugs.

pointcloud(10000) 5 days ago [-]

On the other hand, Notion (http://notion.so/) had exactly the same bug a few weeks ago in their Markdown editor, and they have fixed it.

agluszak(4204) 6 days ago [-]

Still waiting for an option to disable 'drafts' (channels all of a sudden jumping to the top of the channel list)

ceejayoz(2116) 6 days ago [-]

I'd be 100% fine with drafts if it cloned the channel listing instead of moving it.

pizza234(4203) 6 days ago [-]

I actually like that; it reminds me to cleanup after myself :-)

munk-a(4119) 6 days ago [-]

I've been loving their responsiveness lately /s. I `ctrl-k` jump to another convo and start typing and the lag between `ctrl-k` closing and switching to the new convo is enough that I end up typing a few characters into the prior conversation - then I need to jump back to clear out the draft.

Their channel listing is generally crap though, I gave up on trying to navigate it and just use `ctrl-k` jumping for everything, which isn't terrible because it means I spend less time on the mouse.

saltcured(10000) 6 days ago [-]

Ha, I wish I could disable 'threads' so nobody else can use them...

barrkel(3135) 6 days ago [-]

Your mistake is showing a channel list. Hide unread channels; it's the only way to use Slack IMO.

pkamb(4124) 6 days ago [-]

Now support real Markdown.

munk-a(4119) 6 days ago [-]

In particular their ``` code block support is super frustrating, especially since copying a code block produces invalid markdown even if it was originally written properly: they like to pull newlines out of it.

m463(10000) 6 days ago [-]

Why is markdown so popular? It seems like a hacked up language instead of a language that was designed.

wool_gather(4042) 6 days ago [-]

Well, they put it in the 'Advanced' settings section, which I interpret as reluctance, but it's there.

munk-a(4119) 6 days ago [-]

I am a bit afraid they might be supplying it for just a limited while, until they fix it up and re-enable it for everyone.

cyrusmg(4100) 5 days ago [-]

Do not forget you can say thank you via the /feedback command in Slack

stephenr(4028) 5 days ago [-]

'Thank you for giving us a hard to find, poorly named option to opt out of your buggy shit'?

ripley12(4220) 6 days ago [-]

The ability to format markup with keyboard shortcuts seems to be gone now. Previously you could select some text, type ctrl-i, and it would add the necessary markup to make the selected text italic. Same for bold text etc.

Still, I'll take it over the WYSIWYG editor which was a nightmare for code snippets.

KORraN(4202) 6 days ago [-]

You're right, after disabling the WYSIWYG editor, keyboard shortcuts don't work. That means I'm staying with the extension for Firefox...

njacobs5074(4221) 5 days ago [-]

I'm a bit amused because:

1. Their customer support emphatically told me they wouldn't make this an option AND

2. They didn't even need to do a new release to turn on the option.

At least their engineering team is still making good decisions :)

anonytrary(4070) 5 days ago [-]

Customer Support having outdated/bad communication with Product is not surprising at all. Whenever the person who makes a decision is not the person who relays it, this kind of error can be introduced. It's like a big game of telephone.

vhogemann(10000) 6 days ago [-]

Bring back the IRC gateway.

bfrog(4217) 6 days ago [-]

Just bring back IRC

wodenokoto(4021) 5 days ago [-]

I don't get the commotion about this. The old slack editor was really poor, and the new one is just as poor, but differently.

But at least with the WYSIWYG you can see before you send what parts of the markup it got wrong.

MrOxiMoron(10000) 5 days ago [-]

yes, but with the new one there are situations where you can't correct a mistake once you see it.

just start with a backtick escaped piece and then try to add extra text at the beginning of the line...

mikl(3691) 6 days ago [-]

I was never a fan of their pseudo-markdown (it's so close, but the minor deviations are annoying), but it's great to have it back instead of that other train-wreck of an editor. At least you can now edit text without regularly wanting to scream in frustration.

munk-a(4119) 6 days ago [-]

For someone who habitually uses old MUD style emote lead-ins I was constantly needing to back edit comments to fix:

* sigh * this was disappointing

being turned into a list.

Historical Discussions: Why Taxpayers Pay McKinsey $3M a Year for a Recent College Graduate Contractor (December 05, 2019: 771 points)

(773) Why Taxpayers Pay McKinsey $3M a Year for a Recent College Graduate Contractor

773 points 4 days ago by jashkenas in 132nd position

mattstoller.substack.com | Estimated reading time – 11 minutes | comments | anchor


Welcome to BIG, a newsletter about the politics of monopoly. If you'd like to sign up, you can do so here. Or just read on...

A few days ago, Ian MacDougall came out with a New York Times/ProPublica piece on how consulting giant McKinsey structured Trump immigration policy. Lots of people cover immigration. I'm going to discuss why the government buys overpriced services from McKinsey. (Spoiler: It goes back to, of course, Bill Clinton.)

First, I'll be doing a book talk in D.C. on the evening of Dec 10th and one in New York on the evening of Dec 18th for my book Goliath: The 100-Year War Between Monopoly Power and Democracy. There's info below my signature with details. Business Insider named Goliath one of the best books of the year on how we can rethink capitalism. If you have thoughts on the book, as always, let me know.

In case you're in the listening to podcast mode, I was recently on Lapham's Quarterly podcast and Pitchfork Economics with Nick Hanauer to talk monopoly and Goliath.

And now...

As regular readers of BIG know, my basic theory of the world is that most of our political economy problems are caused by these guys being in charge of everything.

The Point of McKinsey: Charging $3 Million a Year for the Work of a 23-Year Old

McKinsey has a lot of high-flying rhetoric about strategy, sustainability, and social justice. The company ostensibly pursues intellectual and business excellence, while also using its people skills to help Syrian refugees. That's nice.

But let's start with what McKinsey is really about, which is getting organizational leaders to pay a large amount of money for fairly pedestrian advice. In MacDougall's article on McKinsey's work on immigration, most of the conversation has been about McKinsey's push to engage in cruel behavior towards detainees. But let's not lose sight of the incentive driving the relationship, which was McKinsey's political ability to extract cash from the government. Here's the nub of that part of the story.

The consulting firm's sway at ICE grew to the point that McKinsey's staff even ghostwrote a government contracting document that defined the consulting team's own responsibilities and justified the firm's retention, a contract extension worth $2.2 million. "Can they do that?" an ICE official wrote to a contracting officer in May 2017.

The response reflects how deeply ICE had come to rely on McKinsey's assistance. "Well it obviously isn't ideal to have a contractor tell us what we want to ask them to do," the contracting officer replied. But unless someone from the government could articulate the agency's objectives, the officer added, "what other option is there?" ICE extended the contract.

Such practices used to be called "honest graft." And let's be clear, McKinsey's services are very expensive. Back in August, I noted that McKinsey's competitor, the Boston Consulting Group, charges the government $33,063.75/week for the time of a recent college grad to work as a contractor. Not to be outdone, McKinsey's pricing is much much higher, with one McKinsey "business analyst" - someone with an undergraduate degree and no experience - lent to the government priced out at $56,707/week, or $2,948,764/year.
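A quick arithmetic check of the rates quoted above:

```python
# Sanity check on the contractor rates quoted in the article.
mckinsey_weekly = 56_707
print(mckinsey_weekly * 52)  # 2948764: the $2,948,764/year figure

bcg_weekly = 33_063.75
print(bcg_weekly * 52)  # 1719315.0: about $1.72M/year for BCG
```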

How does McKinsey do it? There are two answers. The first is simple. They cheat. McKinsey is far more expensive than its competition, and is able to get that pricing because of its unethical tactics. In fact, the situation is so dire that earlier this year the General Services Administration's Inspector General recommended in a report that the GSA cancel McKinsey's entire government-wide contract. Here's what the IG showed McKinsey was eventually awarded.

The Inspector General illustrated straightforward corruption at the GSA, which is the agency that sets schedules for how much things cost for the entire U.S. government (and many states and localities, who also use GSA schedules).

What happened is fairly simple. McKinsey asked for a 10-14% price hike for its already expensive IT professional services (which is a catch-all for anything). The government contracting officer said no, calling the proposal to update the firm's contract schedule with much higher costs "ridiculous." So McKinsey went to the officer's boss, the Division Director. In 2016, a McKinsey representative sent the following email to the GSA Division Director.

We would really appreciate it if you could assist us with our Schedule 70 application. In particular, given that you understand our model, it would be enormously helpful if you could help the Schedule 70 Contracting Officer understand how it benefits the government.

The pestering worked. The GSA Division Director seems to have had the contract reassigned and granted the price increase McKinsey wanted. The director also seems to have been lying to the inspector general, as well as manipulating pricing data, breaking rules on sole source contracting, and pitching various other government agencies, like the National Oceanic and Atmospheric Administration, to buy McKinsey services. Eventually the director straight up said, "My only interest is helping out my contractor."

From 2006 when McKinsey signed its original schedule price, to 2019, it received roughly $73.5 million/year, or $956.2 million in total revenue from the government. The inspector general estimated the scam from 2016 onward would cost $69 million in total overpayments. It's a scandal. But still, something about it doesn't quite make sense. Why would a government division director at the GSA seek to increase costs for the government? It's not bribery, since the IG didn't recommend firing or arresting the government official who pushed up costs (or at least that's not in the IG report).

The Industrial Funding Fee

And this gets to the second reason why McKinsey can charge so much, which has to do less with McKinsey and more with an incentive to overpay more generally. It's more likely something called the 'Industrial Funding Fee,' or IFF. The GSA's Federal Acquisition Service gets a cut of whatever certain contractors spend using the GSA's schedule, and this cut is the IFF. The IFF is priced at .75% of the total amount of a government contract. In the case of McKinsey, since 2006, "FAS has realized $7.2 million in Industrial Funding Fee revenue."
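Applying the 0.75% fee to the roughly $956.2 million in total schedule revenue mentioned above reproduces the $7.2 million figure:

```python
# The 0.75% Industrial Funding Fee, applied to McKinsey's total
# schedule revenue since 2006, matches the quoted FAS fee revenue.
IFF_RATE = 0.0075
total_revenue = 956_200_000  # dollars, 2006-2019

fee = total_revenue * IFF_RATE
print(round(fee / 1e6, 2))  # 7.17: roughly the $7.2M the GSA realized
```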

In other words, the agency of the government in charge of bulk buying isn't paid for saving money, but for spending too much of it. The IFF also incentivizes the GSA to get the government to outsource to contractors anything it can, simply to get more budget. The IFF has been creating problems like the McKinsey over-payment for a long time. In 2013, the GSA Inspector General traced a similar situation with different contractors. Managers at GSA overruled line contracting officers to raise the prices taxpayers pay for contractors Carahsoft, Deloitte and Oracle. Government managers at GSA micro-managed and harassed their subordinates and damaged the careers of contracting officers trying to negotiate fair prices for the taxpayer.

How did the GSA get such a screwed up incentive as the Industrial Funding Fee? Well, in 1995, to get the government to become more entrepreneurial as part of its "Reinventing Government" initiative, Bill Clinton's administration implemented the Industrial Funding Fee structure. It worked in generating money for the GSA. It worked so well that Congress's investigative agency found in 2002 that the GSA stopped having to rely on Congressional appropriations. It had so much extra money that it started to spend lavishly on its "fleet program," which is to say vehicle purchases. In other words, the GSA earned so much money by outsourcing the work of other government agencies to high-priced contractors that it just started buying fleets of extra cars.

By 2010, GSA had "sales" of $39 billion a year. While the GSA is supposed to remit extra revenue to the taxpayer, it often just stuffs the money into overfunded reserves. More fundamentally, the culture of the procurement agencies of government has been completely warped. The GSA's entire reason for existing was to do better purchasing for the government, using both expertise and mass buying power to get value. But now officials try to generate revenue by getting the government to spend more money on overpriced contractors. Like McKinsey.

Does McKinsey do a good job? The answer is that it's probably no better or worse than anyone else. I'm sure there are times when McKinsey is quite helpful, but it's in all probability vastly overpriced for what it is, which is basically a group of smart people who know how to use powerpoint presentations and speak in soothing tones. You can just go through news clippings and find areas McKinsey did cookie cutter nonsense. For instance, McKinsey helped ruin an IT implementation for intelligence services. In the immigration story, MacDougall shows that the consulting firm encouraged ICE to give less food and medical care to detainees. That's cruelty, not efficiency.

Still, it's not all on McKinsey. The Industrial Funding Fee is one reason paying $3 million a year for a 23-year old McKinsey employee instead of hiring an experienced person directly to do IT management has some logic to a government procurement division head. The policy solution here is fairly simple - kill the IFF fee structure and finance government procurement agencies through Congressional appropriations directly. Also follow the IG's recommendation and cancel McKinsey's contracting schedule.

At any rate, at some point decades ago, we decided that most political and business institutions in America should be organized around cheating people. In this case, the warped and decrepit state of the GSA leads to McKinsey-ifying the entire government. Mr. Clinton, you took a fine government that basically worked, and ruined it. McKinsey sends its thanks.

Thanks for reading. And if you liked this essay, you can sign up here for more issues of BIG, a newsletter on how to restore fair commerce, innovation and democracy. If you want to really understand the secret history of monopoly power, buy my book, Goliath: The 100-Year War Between Monopoly Power and Democracy.


Matt Stoller

P.S. I'll be giving book talks in D.C. and New York. Info is below.

December 10 (Tuesday) Washington, D.C. Book talk at Public Citizen at 5pm 1600 20th St., NW, Washington, DC 20009 Please RSVP to Aileen Walsh at [email protected]

December 18 (Wednesday) Brooklyn, NY - Old Stone House from 6:30-8pm A Conversation with Matt Stoller, author of "Goliath", and Zephyr Teachout 336 3rd Street Brooklyn, 11215 INFO

All Comments: [-] | anchor

anonu(2620) 3 days ago [-]

My favorite McKinsey story: Back in 2005 UBS (big Swiss Bank) brought McKinsey in to figure out how to expand more rapidly in the US. The consultants suggested, 'Hey, you have almost no sub-prime exposure. Everyone's trading this stuff! You gotta go in, and in a big way! It will certainly be a pillar of your US strategy.'

So UBS did... The result was one of the most severe debt writeoffs of the Great Financial Crisis.

jankyxenon(10000) 3 days ago [-]

Is the point that McKinsey didn't know the Great Recession was coming / its causes ahead of time?

starpilot(1917) 4 days ago [-]

If you know anyone personally who works at McKinsey, you know them to be brilliant, humble, hardworking, and unfailingly likeable. Think Pete Buttigieg. It's hard to square this with the company's public reputation in anti-establishment screeds.

boudewijnrempt(4058) 4 days ago [-]

I guess you meant this to be sarcastic, right? Doltish, vainglorious, lazy and unfailingly boorish is what comes to my mind when I review the McKinsey people I've met.

vincent-toups(10000) 4 days ago [-]

These qualities are not incompatible with being hatchet-men or giving shitty advice.

txcwpalpha(4208) 4 days ago [-]

There's a lot to unpack here, but the general gist is that McKinsey is overpaid for mediocre services. It's hard to argue against that, and McK in particular does have some insanely unjustifiable rates, but I do think the author has a bit of a misunderstanding of why consultancies are hired in the first place.

To be clear, when a consultancy like McK puts a 23 year old analyst on a project, the $50k/week bill isn't just paying for the knowledge in the analyst's head. The analyst is really just a vehicle to deliver the expertise of the actually-experienced consultants and the knowledge base of the entire firm. The experienced consultants don't have the time to sit and write out the powerpoint deck, so instead they throw as much knowledge as they can at the analyst and the analyst is the one who synthesizes it into something digestible by the customer. Is it worth $50k/week? Fuck no. But it is at least worth recognizing that you are getting more expertise than a sole bachelor's degree.

Second, as someone who has both worked as a consultant at major companies like McK and worked on the other side of the table, and as someone who has long despised the consulting profession because of its overpriced bullshit, I think the author would be surprised at how effective even a measly 23 year old can be. As a consultant, it would drive me insane that my company was billing me out at $700/hr to help implement an IT system when all I was actually doing was reading and regurgitating the software's documentation and making sure that the client didn't ignore it. Surely the client could do this themselves and save the $700/hr, right?

It took me a long time to realize that the answer to this question was actually no. The typical HNer will probably be surprised (and saddened, if you're like me) to realize that the average corporate worker bee is in fact not competent enough to do the most basic tasks, such as refer to documentation, without hand-holding. It is an unfortunate reality that sometimes a well-educated and well-vetted 23 year old analyst can do things much better than a 20-year industry veteran. Companies do know this, and this is what they are paying for, much to the chagrin of the rest of us.

As a last note, the author mentions the possibility of just hiring an IT professional to do the work instead of hiring a consultant. This is something I also see come up a lot in consulting discussions. The reason this isn't done is because hiring an employee full-time brings on a lot of risk to the hiring company. It can take months to hire someone, many more months to train them, and then if they don't perform up to snuff, it can take years to build up a case to fire them, all the while they are sitting on your payroll taking up budget. A consultant, on the other hand, can be hired in a day, trained in a week, and if needed, can be fired in a minute. You pay a premium for the agility to hire and fire them quickly, but that's the entire point.

mips_avatar(4109) 4 days ago [-]

It seems like people do really well for themselves after they leave McKinsey. Do you think this is mostly a function of the cachet that having McKinsey on your resume brings? What skills are really developed as a management consultant?

teddyuk(4206) 4 days ago [-]

What I took from it is that although it looks like it must be corruption, it is simply incompetence.

creaghpatr(3768) 4 days ago [-]

I agree, especially on the implementations front: when the success or failure of a huge implementation project hinges on a single full-time employee (or a small team), it's often better to just pay a premium to consultants to minimize the risk of a botched integration and have some written recourse, rather than firing an incompetent team and suffering the consequences of a botched project at the scale of a large company.

thanatropism(3955) 4 days ago [-]

> The analyst is really just a vehicle to deliver the expertise of the actually-experienced consultants and knowledgebase of the entire firm.

This will sound stupid but MBB have industry benchmark spreadsheet databases that are a significant competitive advantage against our smaller firm.

jariel(10000) 4 days ago [-]

Can concur with this.

The rank and file of many orgs have no thoughts of their own.

But the really shocking thing is how executive-level management at so many organisations can't either. Most execs have no idea what their strategy is; they're just the 'responsible people who are in charge'.

FYI, in some cases 'the Firm' may actually have deep knowledge in a particular area - such as 'fast-moving goods' or whatever. So the company is actually 'buying' the exposure the consultants will have had to other folks in their industry.

But the prices are boondoggle insane, out of touch with reality.

Scoundreller(4218) 4 days ago [-]

> that the average corporate worker bee is in fact not competent enough to do the most basic tasks, such as refer to documentation, without hand-holding.

I think a lot of this ability is selected-out or outright discouraged in a lot of environments.

If the new employee starts fixing broken things, they're often crushed for stepping out of line.

And god help them if the problem becomes worse (caused by them or not) when it wasn't their job to touch it.

New employee: why do we do X in Y way?

Veteran: because that's the way we always did it

Reminds me of when I made my own way-finding signs for visitors. While my director loved it and wrote me a thank you card, that didn't stop janitorial from ripping them down. While they were 100% helpful, they were 100% unapproved.

I still oil door hinges and clean some of the uncleaned areas myself. And fix other things that aren't "my problem", but it's a fine line.

mac01021(3957) 4 days ago [-]

> The analyst is really just a vehicle to deliver the expertise of the actually-experienced consultants and knowledgebase of the entire firm. The experienced consultants don't have the time to sit and write out the powerpoint deck, so instead they throw as much knowledge as they can at the analyst and the analyst is the one who synthesizes it into something digestable by the customer. Is it worth $50k/week? Fuck no. But it is at least worth recognizing that you are getting more expertise than a sole bachelor's degree.

How much do those more experienced analysts make per week? I would think $50k per week would be enough to buy me the undivided attention of _all_ of whatever team of people at the consultancy has the expertise I need.

throwaways66722(10000) 4 days ago [-]

Current McKinsey. Former 23 year old.

I would take a different perspective. The issue is not 23 year olds being better than others. It's the fact that the 23 year old from McKinsey comes in with incredible political capital equivalent to whoever hired him at the top, and that enables him to be multiple times more productive than someone else. Most organizations are... slow.

WhompingWindows(4217) 4 days ago [-]

Pete Buttigieg worked at McKinsey after his college days...which was only 10-15 years ago. I wonder if/how he was exposed to unethical practices and grafting like this...

sjg007(10000) 4 days ago [-]

If he was, I expect he understands the issue and as President would work with Congress to reform the practice.

abeppu(3702) 4 days ago [-]

On glassdoor it seems like that recent grad business analyst is taking home ~100k [1]. One of the articles earlier this week about McKinsey as 'Capital's Willing Executioners' [2] talked about how the firm spreads a specific capitalist ideology.

Given that (in this instance), the company is charging a 30x markup for the labor of that recent grad business analyst ... should we be thinking of McKinsey analysts as misguided exploited workers?

[1] https://www.glassdoor.com/Salary/McKinsey-and-Company-Busine... [2] https://www.currentaffairs.org/2019/02/mckinsey-company-capi...

tedsanders(3867) 4 days ago [-]

I worked as a management consultant. When my firm billed my time out at ~$10K/day and paid me ~$150K/yr (a ratio of ~20:1), I didn't feel exploited. That ~$10K/day covered way more than just my time.

Here's what that billing had to cover:

* My normal work hours [the thing we billed for]

* My overtime hours

* My annual bonus

* My benefits & retirement savings & overhead & employer-side taxes

* My time spent on business development & sales (most of which don't pan out)

* My time spent on discounted or free projects performed to seed future sales

* My time spent in training & development (probably ~4 weeks in the first year)

* The time of managers spent training & developing me

* My support staff (industry analysts, Powerpoint designers, internal experts)

* Multi-million-dollar IT investments to build tools and databases to support me in my work

* A couple first-class flights a week

* A few nights a week at expensive hotels

* Lyfts & Ubers everywhere

* All meals expensed

* The firm's partners working the case

* New employees for whom we don't bill

* Fees for market reports

* Fees for expert interviews with industry executives (in some cases we'd do dozens of interviews a week, paying ~$1K/hr each)

* Office overhead (rent, power, insurance, etc.)

* Office support staff (HR, finance, janitors, etc.)

* Profits for the firm's equity holders

(Now a few of these expenses were billed separately I believe, but it shows the wide breadth of costs that a firm has to cover.)

None of this opportunity would have existed without the firm, the network, the brand, and the partners. It's amazing to think that all I had to do was show up and work, and in exchange, I would get paid a high salary, get experience working with senior executives, get a prestigious brand on my resume, and accelerate my career relative to most career tracks. I don't regret the job, nor do I think it's surprising that there are crowds knocking on the door trying to get in.

Looking at other industries, I wouldn't be surprised if a cashier at a grocery store made less than 5% of what they put in the register each day. And I wouldn't be surprised if an oil driller got paid 5% of the oil that they extracted each day. Even if we're the ones touching the revenue, we're still a small part of what it takes to run a big business.

markhall(3541) 4 days ago [-]

While I understand the frame of thinking, you'll have a difficult time convincing any advocating party that someone making $100K+ is 'exploited'. Multiple examples can probably be found across a number of industries.

code4tee(3751) 4 days ago [-]

Have been on the receiving end of several McKinsey engagements.

The work itself was generally not all that good. The knowledge of "experts" brought into meetings rarely contributed more than what a reasonably intelligent person could dig up on Google search results in an hour. They were also often farmed out on random staff augmentation functions that just annoyed the hell out of people. "Hi I need you to fill out this excel spreadsheet with 35 columns so we can put a presentation together... oh and if you could do that by 6 PM tonight that would be great." That sort of nonsense so they could produce some nonsensical 50 page PowerPoint deck that nobody read.

There were a few decent people there but by and large value was not generated. As others have pointed out a major motivator seemed to be to provide some C-level exec with CYA coverage to claim that programs being implemented were based on the advice of outside "experts."

In the two main cases I saw, the McKinsey strategy ended up being a total disaster that seriously damaged the company, and in both cases the C-level exec that hired them got canned as a result, so in the end even the CYA concept didn't really work.

choppaface(4108) 4 days ago [-]

Have worked with two ex-McKinsey consultants (hired as employees) and want to echo the observation that they tend to use their fast execution to generate excessive BS like large spreadsheets to win office politics.

In one case, the guy had built a small org to fulfill a need that we had previously outsourced to a vendor. The company was thinking of signing a large contract with the vendor and he created a barrage of 20-page spreadsheets to try to justify his budget. He made me sit with him for an hour to extract random cost estimates for one of his sheets. The vendor ended up turning into a Silicon Valley unicorn... exactly the sort of opponent he wanted I guess.

I think the trouble with McKinsey is they breed competitors who have never seen working products. If a product works, there's probably a clear vision and a clear definition of value. When products are broken, winning office politics looks valuable, hence the McKinsey fascination.

edelans(4193) 3 days ago [-]

CYA = Cover Your Arse

(I just found about that one, sharing if it can help others)

surak(4027) 4 days ago [-]

McKinsey have messed up government policies all over the world when it comes to the McDonaldization of society. They use simple measurements to model how complex societies should work; unfortunately, this only works for McKinsey and their partners.

gwd(10000) 4 days ago [-]

Sounds a bit like a lot of 'machine learning' actually (when it's done blindly).

But do you have any specific references for the kind of thing you're talking about? It sounds plausible, but if I had a dollar for every time something plausible-sounding was completely wrong...

Vaslo(1607) 4 days ago [-]

I've had a lot of experience with McKinsey doing projects at my company, at the different levels I've been at throughout the years. When it's highly technical things like implementing a new technology you know nothing about, or a new manufacturing footprint that is tangible to perform, they do good work. Anything "Strategic" or fluffy I find a waste of time. They are excellent salespeople and do a lot to calm the C-suite, but I rarely see these fluffy items driving employee happiness or the bottom line after spending millions.

mdorazio(10000) 4 days ago [-]

FYI that's likely due to there being different organizations inside McKinsey (and all the big consultancies) with very different goals and employees. Technical implementation work usually comes out of a different org than management/strategy consulting.

hogFeast(10000) 4 days ago [-]

A question I have asked myself a few times as the guy who has to work out whether to invest in the guys hiring McKinsey: if you need McKinsey to do 'technical' work for you then why am I giving you money?

Strategy is very useful...but, again, McKinsey know almost nothing about this too. In my experience, capital allocators are born not made. The idea that you can be a strategic genius without ever taking any risk yourself (i.e. the kind of person who goes to the colleges McKinsey hires from) is just wrong.

And the funniest thing about McKinsey is they are, apparently although not so much anymore, arch capitalists but largely exist to milk money from the principal-agent problem. McKinsey is the problem.

mumblemumble(10000) 4 days ago [-]

Doesn't seem surprising. If it's implementing a specific technical process, there are measurable outcomes that must be met, and your client will know whether or not you met them.

Senior leadership contracting an outsider to tell them how their own industry works and explain their place in it, though, is right up there with asking the Psychic Friends Network for career advice. They're just begging to be taken for a ride.

cafard(10000) 4 days ago [-]

I don't remember seeing HN articles about McKinsey before this month or last. Now there are three articles about McKinsey on HN, including one that says that the Houston Astros' sign-stealing exemplifies the decline of a McKinsified America. (Or maybe it's the Astros that are M'd.)

How did it become the flavor of the month?

Nicholas_C(4219) 4 days ago [-]

The Astros' current general manager Jeff Luhnow worked at McKinsey before he got into baseball.

mayneack(1786) 4 days ago [-]

McKinsey has been generally in the news of late as Pete Buttigieg has gotten more traction in the polls, because that's what he credits for much of his experience. It's probably not HN specific.

sincerely(4210) 4 days ago [-]

Presidential candidate Pete Buttigieg has been relatively outspoken about his time working at McKinsey (although he can't discuss specifics due to an NDA), which may have sparked a wave of 'what does McKinsey actually do'

kevinconaway(4006) 4 days ago [-]

Likely due to the recent NYT article that detailed how McKinsey helped the Trump administration carry out its immigration policies[0].

[0] https://www.nytimes.com/2019/12/03/us/mckinsey-ICE-immigrati...

mdorazio(10000) 4 days ago [-]

Really? I see McKinsey posts come up at least once a quarter. Just do a google site search on HN for 'mckinsey'.

wil421(4074) 4 days ago [-]

McKinsey comes up because they are an elitist consulting firm and HN is somewhat of an elitist forum. I am using elitist in a good way, although there's certainly elitist attitudes on here.

As someone who might join a Big 4 in IT consulting soon it's nice to see what McKinsey is up to. It's much nicer to see accountability for one's actions.

theslurmmustflo(3854) 4 days ago [-]

Pete Buttigieg is surging, McKinsey is in the air.


sjy(4219) 4 days ago [-]

Matt Stoller has been pumping out issues about government contracting over the past few months, and they've all been popular on HN. McKinsey is the archetypal management consulting firm, and the role of consultants is an important part of any discussion about the concentration of market power. Often the commentary on these issues concerns the tech companies many HN readers work for.

SigmundA(10000) 4 days ago [-]

I would like to know too; I thought it was the Baader–Meinhof effect.

faizshah(4086) 4 days ago [-]

Matt Stoller wrote a really interesting book giving a history of monopoly power in America: https://www.amazon.com/Goliath-Monopolies-Secretly-Took-Worl...

On my winter reading list :)

ak217(3620) 4 days ago [-]

I plan to read it at some point, but I was really turned off by Stoller's relentlessly cloying self-promotion in his newsletter. I get it, he writes well on important topics, but he could use a bit more subtlety in pushing his stuff.

pkilgore(10000) 4 days ago [-]

It's really good!

Wordball(10000) 4 days ago [-]

Thanks for the recommendation, I didn't read the article either.

z3ugma(10000) 4 days ago [-]

I was a technical project management consultant for a long time. The value that most orgs get from a consultant isn't really in the advice the consultant gives them, it's the political cover to make changes they knew they should make all along, but didn't have the social capital or the focus to make those changes until they had a person in a chair across from them.

sbuttgereit(2255) 4 days ago [-]

I have been on the 'client' side of consulting engagements and a consultant myself. I think you identify the central truth of consulting: that usually we're not bringing any magic knowledge to the table.

However, I think there are other circumstantial values that appear alongside the one you identify. I find that consultants not only provide cover, but also provide a level of focus on non-immediate, but important, problems that doesn't otherwise materialize on its own. Nobody wants to be seen wasting the highly paid consultant's time so you often get people paying attention when they might not otherwise. There's also setting the stage for what ends up as a sort of professional group therapy sessions and with the consultant as a mediator. I think you touch on that, but maybe I see some greater emphasis on that bit.

taude(3958) 4 days ago [-]

Yeup, this. I worked at a former company that paid 7 figures for a re-org (lay-offs), and it was run mostly by 25 year olds, with the occasional pre-recorded video by a big-co partner. It was a really interesting dynamic that felt more like 'Up in the Air'. But it was obvious they did it for liability reasons, so they wouldn't have to take the blame for the lay-offs since it was the consultants who delivered the findings.

crusso(10000) 4 days ago [-]

it's the political cover to make changes they knew they should make all along

No argument here. Every single time that I've been in an organization that has brought in management consultants, the results of their work were almost exactly in line with advice that we had been feeding upper management for some time.

It's infuriating to have the right answers to solve problems while people at the top ignore good advice and then spend huge amounts of money to get that good advice repeated to them by other people.

Even worse is when upper management acts like the consultant advice is the first time they've heard those recommendations.

globuous(3845) 4 days ago [-]

This !

In France, it's a classic move by big companies to have MBB formalize what they already know: a need to restructure, which unfortunately involves firing people. It's easier to justify restructuring because MBB said so than because the company says so, for some reason ^^

greggyb(4185) 4 days ago [-]

An additional aspect is focus. By being expensive and not part of the normal reporting hierarchy, a consulting firm working on a project can have the luxury of focusing on just that project. This is a huge benefit.

organsnyder(4211) 4 days ago [-]

That's also Epic's (the healthcare software vendor) big value proposition. Their software is okay—among an industry filled with shit, their turd is at least polished.

But their delivery model is incredible: they make customers woo them (shouldn't vendors be doing the wooing?), and they have a prescribed way of doing things if they decide you're worthy to purchase their product. What you get in return is a constant cudgel of 'this is the Epic way' to wield when talking with practitioners who are used to doing things their own way (especially doctors—nurses are more flexible and pragmatic, in my experience). This can be especially useful when you have a system of multiple facilities that's grown through acquisitions, with each facility having decades of accumulated practices that aren't quite aligned with your other locations.

lazyasciiart(10000) 3 days ago [-]

That's not always the case. Sometimes they bring a consultant in with no intention of doing anything that gets suggested, so they can point to how much serious effort and money they spend on the topic.

bduerst(10000) 4 days ago [-]

One of my profs in business school said that 1/3 of the time, you hire consultants because you know what you want to do but you need a fall guy in case it goes south. You're paying a premium to offload the liability.

Wash, rinse, repeat with the big 4.

ownagefool(10000) 4 days ago [-]

I don't know if this is always the case.

High-end consultancies have sold outsourcing that decimated the UK's IT orgs, and are now selling in-sourcing, whilst in reality many are just selling outsourcing to them with new buzzwords like DevOps and microservices.

It might be what the businesses wanted to do all along, but I'm not sure it's what they 'knew they should be doing'.

servercobra(10000) 4 days ago [-]

Yup, definitely true. I just got paid a fairly handsome sum for a one week gig at a large organization to write a report about a technical change. It was clear what the outcome was supposed to be and that the manager (who was just taking over some new orgs) already knew what they wanted to do, and just needed an external report to both bolster their decision and CYA. Luckily, what the manager wanted to do is also what I would have recommended, or it could have gotten a bit awkward.

maxkwallace(10000) 4 days ago [-]

On the positive side, I do think that if we repeat this truth enough times we can decrease the 'cover my ass' value of consulting and help more orgs make change autonomously.

timwaagh(10000) 4 days ago [-]

That's just one type of consulting. Perhaps a very important type but governments tend to hire them for far more mundane reasons as well: government workers at some places rarely (never) get fired so it's more efficient to hire a consultant for a while instead of being stuck with a lifer even if the consultant is four times the cost.

sjg007(10000) 4 days ago [-]

I don't subscribe to this viewpoint. You have to have the clout to bring in the consultant in the first place. Now it may be a buy vs build decision and you are looking for advice on the buy side but I don't see it being about political cover when your 'reputation' would be tied up with the consultant.

ndarwincorn(10000) 4 days ago [-]

This entirely misses the forest for the trees. And granted, the author's bias gets in the way of his attempt to point out that forest. Their bias aligns with mine, but with less editorialization, IMO, the following would be clearer:

It's not that the work is or isn't valuable, or where that value is. It's that McKinsey charges 72% more than a competitor for the same consulting, and that there's a massive perverse incentive for public servants in charge of awarding contracts to pick the most expensive one.

From the article, bias left in:

> Back in August, I noted that McKinsey's competitor, the Boston Consulting Group, charges the government $33,063.75/week for the time of a recent college grad to work as a contractor. Not to be outdone, McKinsey's pricing is much much higher, with one McKinsey "business analyst" - someone with an undergraduate degree and no experience - lent to the government priced out at $56,707/week, or $2,948,764/year.


> And this gets to the second reason why McKinsey can charge so much, which has to do less with McKinsey and more with an incentive to overpay more generally. It's more likely something called the 'Industrial Funding Fee,' or IFF. The GSA's Federal Acquisition Service gets a cut of whatever certain contractors spend using the GSA's schedule, and this cut is the IFF. The IFF is priced at .75% of the total amount of a government contract. In the case of McKinsey, since 2006, "FAS has realized $7.2 million in Industrial Funding Fee revenue."


> Does McKinsey do a good job? The answer is that it's probably no better or worse than anyone else. I'm sure there are times when McKinsey is quite helpful, but it's in all probability vastly overpriced for what it is, which is basically a group of smart people who know how to use powerpoint presentations and speak in soothing tones. You can just go through news clippings and find areas McKinsey did cookie cutter nonsense. For instance, McKinsey helped ruin an IT implementation for intelligence services. In the immigration story, MacDougall shows that the consulting firm encouraged ICE to give less food and medical care to detainees. That's cruelty, not efficiency.
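A quick back-of-the-envelope check of the figures quoted from the article. All inputs are numbers stated in the excerpt; the implied contract volume is an inference from the stated IFF rate and revenue, not a figure the article gives:

```python
# Sanity-check of the rates and fees quoted in the Stoller excerpt above.
bcg_weekly = 33_063.75      # BCG rate for a recent college grad, $/week
mck_weekly = 56_707.00      # McKinsey "business analyst" rate, $/week
iff_rate = 0.0075           # Industrial Funding Fee: 0.75% of contract value
iff_revenue = 7_200_000     # FAS's IFF revenue from McKinsey since 2006

premium = mck_weekly / bcg_weekly - 1        # McKinsey's premium over BCG
annualized = mck_weekly * 52                 # analyst billed over a full year
# Inference (not stated in the article): contract volume implied by the fee.
implied_contracts = iff_revenue / iff_rate

print(f"McKinsey premium over BCG: {premium:.0%}")        # -> 72%
print(f"Annualized analyst billing: ${annualized:,.0f}")  # -> $2,948,764
print(f"Implied contract volume since 2006: ${implied_contracts:,.0f}")
```

The first two results match the article's "72% more" and "$2,948,764/year" claims exactly, so the quoted numbers are at least internally consistent.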

dpeck(1123) 4 days ago [-]

Exactly. Early in my career I learned that a consultant is almost never hired by an organization, a consultant is hired by a faction within an organization who already knows what they want the deliverable to be.

gowld(10000) 4 days ago [-]

This is why consultants hire Ivy League graduates. To create a myth of 'Ivy League' 'quality' justifying the decisions they are hired to recommend.

The other thing management consultants do is industrial espionage -- they go into a company to learn how it works, and then advise other companies on how 'industry leaders' operate.

crazygringo(3841) 4 days ago [-]

It's exactly this.

Within any organization, you can have 3 or 6 important people pushing in completely different strategic directions. And it can be clear to any unbiased observer that there's only one good, responsible choice, but within your org that gets met with essentially 'but that's just your opinion, man'.

So you call in a consulting firm and they deliver what most of the people already know. The consultants don't need to be rocket scientists.

But afterwards, nobody can say it's just your opinion any more. It's now expert analysis that you paid good $$$ for, and everybody who disagreed before now basically has to get on board.

So it's not even necessarily political 'cover', but almost like a referee that brings enough credibility to settle otherwise intractable internal disputes.

Also, this lets the CEO appear unbiased to all the people who 'lost'. So they can get on board with the new policy but not feel like the CEO shot them down personally, which is bad for morale, can lead them to quit, etc. (Just because they believed in the wrong strategy doesn't mean they still can't be super-valuable in the future in executing the right strategy.)

Icathian(10000) 4 days ago [-]

Absolutely this. It baffles me how many people can understand and repeat ad nauseam 'nobody ever got fired for buying IBM' but don't get the role a consultant plays in corporate strategy.

reallydontask(4091) 4 days ago [-]

it's the political cover to make changes they wanted to make all along

There is a difference, sometimes subtle sometimes not so

chaostheory(224) 4 days ago [-]

If we're including the other consulting firms, it's not just that. These companies effectively also function as a contracting platform, i.e. an "AWS" for on-demand employees instead of servers.

skydoctor(4203) 4 days ago [-]

This calls for a P2P model of consultancy then: Company A provides neutral consultancy to Company B (in an unrelated field), and Company B or C provides neutral consultancy to Company A.

braythwayt(956) 4 days ago [-]


I have done lots and lots and lots of enterprise sales and consulting, and a big part of bringing in outsiders is to provide the illusion of social proof.

'Well, we had the experts in, and they found...'

That sounds really, really terrible when put so cynically, but the flip side of that is to view it as insurance. If you bring the consultants in, hoping that they will recommend Plan A, and Plan A is truly terrible, a reputable consultant will find a way to sell you on Plan B, by couching it as 'A few adjustments to Plan A.'

So in effect, yes, it is about making changes management wanted all along, but in addition to providing the illusion of social proof, you can also get an extra set of eyes to make sure that you don't completely footgun yourself.

Sometimes. Maybe. If the consultants are good at both analysis and selling management on adjusting their plans...

CharlesColeman(4211) 4 days ago [-]

> it's the political cover to make changes they knew they should make all along

Or the political cover to make the changes they wanted to make all along.

mathattack(464) 4 days ago [-]

Same as "nobody got fired for hiring IBM"

thinkloop(3571) 4 days ago [-]

There's another piece to it too: they are able to legally sell trade secrets between organizations. They update their 'knowledge base' every engagement and become 'specialists' in the field.

dsfyu404ed(4095) 4 days ago [-]

TL;DR organizations respond to incentives, and the incentive laid out in the legislation for the organization responsible for setting the prices the government pays is to spend money, not save it, and McKinsey takes advantage of that.

Considering that a big player like McKinsey can't systematically overcharge the government and get away with it, due to how noticeable it would be in terms of sheer dollar amount, there are likely other smaller players making even more absurd profits on their services. I wonder who they are, since IMO their existence is more worthy of investigation than some BigCo walking right up to, but not over, the line drawn by the law (as BigCos do). This is not to say that the incentive structure shouldn't be changed to categorically prevent all this.

rootusrootus(4222) 4 days ago [-]

Okay, I'll ask the next question. How do I get to be one of those smaller players. Ha!

xvector(2915) 4 days ago [-]

From the linked article[1]:

I find it absolutely INSANE that highly specialized organizations like the NSA rely on McKinsey to direct their future operations.

These people have intimate knowledge of their organization and culture, shaped and hardened by years of challenges and learned experiences. What can they possibly think bringing an outsider in to do a complete overhaul will solve!?

Why can they not apply their own experience to solve problems in their own organization!? The proliferation of consulting firms baffles me. It seems like you're just paying people for critical thinking.

Unrelated: All of my classmates that went into consulting joke about how absurd the field feels to them.

[1]: https://www.politico.com/story/2019/07/02/spies-intelligence...

Aunche(10000) 4 days ago [-]

The director of the NSA gets a modest salary of $180,000. That's roughly the comp of an associate at McKinsey or entry level at a FAANG. I doubt that they can attract the top talent with the amount they're paying.


dv_dt(4111) 4 days ago [-]

The organization may not need them, but the (at some level appointed) leadership may want to do something despite the organization...

Spooky23(3656) 4 days ago [-]

Easy, McKinsey and their ilk give zero shits about office political stuff, just the principal paying the bill.

In some orgs, people working in the org cannot defy certain non-cooperative people. The consultants can come in, demand the data they need, and immediately escalate to the principal.

irrationalactor(10000) 4 days ago [-]

Having worked in the field for a few years after graduation myself, I've come to think of 'strategy consulting' as mostly bureaucratic arbitrage.

8 times out of 10, the organization knows exactly what needs to be done. However, big organizations have thousands of stakeholders with competing desires. Steering the ship in a different direction can have negative consequences for thousands of people.

Due to this problem...even the people in charge struggle to do what needs to be done without being pitchforked out of the place.

BCG, McKinsey, etc. provide the perfect cover. Their mythical brand image and powerpoint sales abilities allow management to do what needs to be done, while offloading the risk onto a 3rd party. If the changes don't work out, you can fire the consultants! The management was just following the advice of the outside experts.

zaptheimpaler(3981) 4 days ago [-]

> In the immigration story, MacDougall shows that the consulting firm encouraged ICE to give less food and medical care to detainees.

I guess this is one big reason companies hire consulting firms - a consultant can act as a mouthpiece to say things you want people to hear, but not say yourself. A kind of moral laundry, absolving insiders from blame. Similarly they could recommend a risky business plan that someone believes in but doesn't want to stick their neck out for - if it fails, blame the consultants and move on.

txcwpalpha(4208) 4 days ago [-]

>It seems like you're just paying people for critical thinking.

That's exactly what it is. Unfortunately most organizations, probably even ones like the NSA, really struggle with doing any kind of critical thinking on their own. This is especially true when trying to decide on a future direction for the org: the org itself is most likely biased to just doing things the way it's always been done. In that regard, having an outsider's perspective, especially one that has also done work at other similar organizations and has taken note of all the various ways other orgs do things and knows what works and what doesn't, can be quite beneficial.

riazrizvi(10000) 4 days ago [-]

I'm sorry but no case is made to support the idea that taxpayers are buying generic advice. Let's agree that the service buys a fresh college grad assigned full time. The grad might be a customer contact collecting and organizing customer issues before presenting them to internal subject matter experts, and then presenting those findings back to the client. I'd like to hear evidence to support how the service is fresh college grads simply using what they learnt in college and their chutzpah, which is what the article implies.

austinhutch(4218) 4 days ago [-]

> The grad might be a customer contact collecting and organizing customer issues before presenting them to internal subject matter experts, and then presenting those findings back to the client

This might describe how technical consulting is done at big firms where the output is a product, but when the output is strategic documents... this is grunt work done by the front line analysts with a generic playbook. There is no 'behind the scenes' strategy work like there is for dev. Maybe there is a lower analyst or intern that doesn't interface with the customer.

MFLoon(10000) 4 days ago [-]

Sort of a buried lede here. There's nothing shocking about overpriced, underdelivering consultants. But the bit about how the GSA is essentially profit sharing with said overpriced consultants at the expense of the rest of the government and taxpayers, via the IFF incentive structure, is pretty mind blowing. Another brilliant financial 'innovation' by the Clinton Administration that's been quietly burning billions of taxpayer dollars to the end of significantly less efficient government for decades now...

TrackerFF(10000) 4 days ago [-]

McKinsey, Bain, BCG, et al. are all extremely close to state and industry in every country.

I'm from Norway, and every now and then someone will publish articles about the governments exorbitant use of consultants (McKinsey at that) for seemingly menial tasks which should be handled internally.

Truth is, these management consulting firms have some very good advantages to keep their business model going:

1) Their alumni are spread all over the world, usually in upper management.

2) They get highly detailed information from all sides, and can tweak their best practices accordingly.

3) They take the blame, so as others have mentioned, they're basically million-dollar CYA insurance for politicians and executives alike.

You could remove incentives like those mentioned, but I'm sure they'll find some other way to make money. They're so tightly integrated, and in so many places around the world, they probably have hundreds and thousands of strategies up their sleeves.

l5870uoo9y(4197) 4 days ago [-]

It is not only a US pattern, go to every major European capital and the consultancies will be located right next to the ministries.

hwbehrens(10000) 4 days ago [-]

The IFF referenced in the article seems like a great example of how good-intentioned changes to incentive structures can have very warped outcomes, potentially years later. These kinds of effects keep popping up for me, even in industry contexts (e.g. stack ranking).

Are there accepted mechanisms for systematically identifying these knock-on effects, and if so, what are they and how can they be more broadly applied? How many 'hops' of influence can you get away from the change before the effects are impossible to predict?

Or does it just boil down to 'ask very smart domain experts to think about the problem very hard for as long as you can afford to pay them'?

beaconstudios(4093) 4 days ago [-]

the general topic would be systems theory / systems thinking. Lots of little scraps of knowledge of these kinds of knock-on effects are scattered around in various fields and disciplines, but I don't know of anywhere where they're gathered in one place for broad consideration or application. Not yet, anyway.

drak0n1c(4177) 4 days ago [-]

That's already in practice - there are dozens of competing think tanks that churn out pretty deep critiques of legislation before it comes up for a vote. Whether for better or worse instead of the legislation being edited accordingly the criticisms are usually dismissed as 'Republican/Democrat talking points'.

codesforhugs(10000) 4 days ago [-]

The fundamental problem with incentives is that they're asymmetric in nature: The incentivee has a lot more time (and direct motivation) to come up with a way to game the incentive than the incentivizer can spend when setting it.

The only real way to address that is to revise incentives on a frequent and regular basis — but who wants to do that? Certainly not legislatures.

smacktoward(41) 4 days ago [-]

In terms of lawmaking, the usual way to deal with this problem isn't with front-end analysis, but rather by adding a 'sunset provision' (see https://en.wikipedia.org/wiki/Sunset_provision). That's a clause in a bill that requires it to be re-authorized periodically in order to stay in effect. If a law with a sunset provision ends up causing unintended consequences, then lawmakers can let it die just by doing nothing. That's an easier lift than you get in a bill without a sunset provision, which can only be killed if you can convince a majority of lawmakers to actively kill it.

Sunset provisions aren't nearly as widely used as they probably should be.

mac01021(3957) 4 days ago [-]

I might just be dense, but I don't fully understand how the IFF was supposed to work. What was its intended effect? Is it just a tax that the contractors have to pay on spending the government's money, to disincentivize them to spend that money?

tootie(10000) 4 days ago [-]

The piece I don't get is why the agencies didn't revolt. I get how it's a warped incentive for the GSA but isn't the GSA working at the behest of agencies like ICE? Shouldn't ICE be up in arms that their budget is being squandered?

iudqnolq(10000) 4 days ago [-]

The key quote this is responding to:

> ... less [to do] with McKinsey and more with an incentive to overpay more generally. It's more likely something called the 'Industrial Funding Fee,' or IFF. The GSA's Federal Acquisition Service gets a cut of whatever certain contractors spend using the GSA's schedule, and this cut is the IFF. The IFF is priced at .75% of the total amount of a government contract. In the case of McKinsey, since 2006, "FAS has realized $7.2 million in Industrial Funding Fee revenue."

> ... The IFF also incentivizes the GSA to get the government to outsource to contractors anything it can, simply to get more budget. The IFF has been creating problems like the McKinsey over-payment for a long time. In 2013, the GSA Inspector General traced a similar situation with different contractors. Managers at GSA overruled line contracting officers to raise prices taxpayer pay for contractors Carahsoft, Deloitte and Oracle. Government managers at GSA micro-managed and harassed their subordinates and damaged the careers of contracting officers trying to negotiate fair prices for the taxpayer.

> How did the GSA get such a screwed up incentive...?... [T]o become more entrepreneurial as part of its "Reinventing Government" initiative, Bill Clinton's administration implemented the Industrial Funding Fee structure. It worked in generating money for the GSA ... so well that Congress's investigative agency found in 2002 that the GSA stopped having to rely on Congressional appropriations.

TheSoftwareGuy(10000) 4 days ago [-]

>Are there accepted mechanisms for systematically identifying these knock-on effects, and if so, what are they and how can they be more broadly applied?

Yes, look for cases where A is rewarded, but B is expected:


oefrha(4042) 4 days ago [-]

Title's terrible; I read it as taxpayers paying $3M a year per U.S. college graduate. It's not even the original title, "Why Taxpayers Pay McKinsey $3M a Year for a Recent College Graduate Contractor."

CharlesColeman(4211) 4 days ago [-]

The actual article title is 'Why Taxpayers Pay McKinsey $3M a Year for a Recent College Graduate Contractor,' don't know if it was edited or the poster just chose something different.

ativzzz(10000) 4 days ago [-]

I've noticed a lot of posts on HN recently get their titles changed after posting, and this one will as well because it is quite incorrect as you pointed out.

thorwasdfasdf(10000) 4 days ago [-]

from the article >>> 'In other words, the agency of the government in charge of bulk buying isn't paid for saving money, but for spending too much of it. The IFF also incentivizes the GSA to get the government to outsource to contractors anything it can, simply to get more budget. The IFF has been creating problems like the McKinsey over-payment for a long time.'

hef19898(3098) 4 days ago [-]

And yet there are people refusing to accept that fact; I met one of them. And that guy also failed to realize what an immense cash-flow advantage government has over private-sector companies. Government wants to spend money as soon as possible. That alone is a huge lever in procuring hardware. Not sure why they don't use it.

jshaqaw(10000) 4 days ago [-]

Former McKinseyite here (and one reasonably skeptical of the firm as a whole)... nobody hires McKinsey for fresh college grads. That's just something they tell fresh top-of-class Ivy League college grads to make them feel important. Clients hire Directors to give them peer level strategic counsel. How they technically allocate payment for time across consultants is just backfilling to get to a number already agreed to in advance.

mdorazio(10000) 4 days ago [-]

This is slightly twisting what you actually get, though. You want the Director-level time (and that's what is advertised during the sales process), but realistically you get a team of largely inexperienced kids supported by the person you actually want, who gives you a few hours while billing for the whole team to meet their blended rate target. In a lot of cases it's become a bit of a weird situation where the strategic guidance is almost free if you sign up to pay for a team of 20-somethings to follow a playbook and make some decks.

bkor(3137) 3 days ago [-]

I've worked with various people from McKinsey. The work is not that impressive. It's like having an MS Office expert, especially in Excel and PowerPoint. The person is able to work long days, so after a workshop they spend a huge amount of time making it look nice again. Though one told me they have a separate department for quickly making presentations look 'nice'. Meaning, the consultant just sends it off to that other department.

Reading how expensive they are I'm glad McKinsey isn't used anymore.

The major reason they're hired is the McKinsey name (for convincing others in your company). Not so much for anything else, IMO. Maybe you were doing great work, but that's not what we saw (and I saw several).

jshaqaw(10000) 4 days ago [-]

By the way, this is not to denigrate the analysts at McKinsey who were in my time typically very smart, nice, hardworking, etc... They performed an important data gathering function. But the bill for their time has the sort of relationship you get when a Legal Assistant at Cravath photocopies stuff in your file. No they aren't "worth" $500k/hour or whatever rate they bill at these days to photocopy but that isn't the point. You are paying for Cravath and how that Cravath fee is broken down is largely irrelevant.

iudqnolq(10000) 4 days ago [-]

I'm trying to parse the phrase 'peer level strategic counsel'.

Does it mean advice as good as the advice we're giving other firms like you?

gcbw3(10000) 4 days ago [-]

And let's not forget the outcome of those illegally overpriced contracts: https://www.propublica.org/article/how-mckinsey-helped-the-t...

turk73(10000) 3 days ago [-]

Laws on the books got enforced?

I don't get all the bleeding heart nonsense about deporting illegals. The reality is that a certain element of the population commits most of the crimes and makes life a living hell for citizens trying to go about their lives. It is the duty of our government to follow the law and follow through with helping legal CITIZENS maintain their living standards. Failure to do so both de-legitimizes government as well as turns our nation into a chaotic mess, something I would prefer that we avoid.

I expect the government to enforce the law, not selectively enforce it, but actually enforce it. If we choose not to then we should repeal those laws. There really isn't a lot of middle ground here--selective enforcement is a liberal tactic used to push an agenda. No other country has been as lax about border security as the US has been and no other country has been as generous to the overall criminal element that benefits from lax/corrupt law enforcement. No other country has been so generous with taxpayer dollars spread around to both friends and enemies. We subsidize Europe, we subsidize North Korea and everything in between. When Trump brings up this sad fact, he is excoriated by the media. But that doesn't mean he's wrong.

While Americans are paying for the safety and upkeep of most of the rest of the world, generational Americans are being replaced by more pliant newcomers for political purposes while being told our rights don't matter and our needs don't matter even though it is we who pay the taxes to supply those things such as good schools for OUR children. It's falling apart--I'm watching it in our own school district and the especially in the neighboring one where ESL students have overwhelmed the entire system and its ability to budget. All brought to us by well-intentioned, but misguided, profit-seeking Christian churches.

An observation: If the county I live in didn't have to arrest so many foreigners for committing crimes there would barely be any crime here. Also, the majority of child welfare cases relate to one issue: Drug/alcohol abuse. I don't even think there would be as much domestic abuse were it not for substance abuse. And what is causing all the substance abuse? Drugs brought to us by lax border enforcement, on purpose, to meet a demand from hopeless souls whose economic lives have been undermined by corporate offshoring and mass immigration, a double-edged knife stuck right in their backs. People, men especially, are giving up and committing suicide.

If these communists succeed, you won't like your life under them, it will be seriously miserable. Not enough food, no heat in the winter, nothing that they promise will ever arrive. Worse, there will be no one left to ride to rescue. It is a dark future for humanity if America falls and we are falling.

alexchamberlain(3383) 4 days ago [-]

> ... Work of a 23-Year Old

This really bothers me. I understand the point of the article is that they're charging a lot of money for an inexperienced consultant, and that no bias (discrimination) was meant. However, age is irrelevant. There are a lot of very 'experienced' 23 year olds, as long as you're asking them about the work they've done. There are a lot of people in their 50s with very little experience (because their actual experience is irrelevant to the topic at hand, or because their experience has been very repetitive).

I think we need to be a bit more conscious of unconscious bias towards age (in either direction).

SolaceQuantum(2683) 4 days ago [-]

I fully agree and believe it would be much stronger to say that this is a consultant with 'x months of experience' than '23 years old' which could mean anything from 'worked in this field since I was 16' to 'hired yesterday'.

mixmastamyk(3510) 4 days ago [-]

There's a lot of truth to this, but the fact remains that a 43 year old will have twenty additional years of experience in life and various fields, not merely a narrow subject. This can improve a project's chance at success.

For an example young folks can relate to, rate your current self against self-five_years_ago for an important task.

whiddershins(2572) 4 days ago [-]

By definition a 23 year old can't have 25 years of experience in anything, even tying shoes.

They can't legally have worked full time in the United States for more than 7 years.

Unless they are a wild outlier, they can't have an advanced degree and significant years of professional experience doing anything.

You kinda have a point but sensitivity to age discrimination towards the young in this sort of statement is a bit of a stretch imo. It's also shorthand for absolute realities of time and space.

Edit: Age discrimination also isn't perfectly symmetrical because:

If you're young, you will very likely get to be old one day.

If you are old, you never get to be young again.

tylermw(3899) 4 days ago [-]

> There are a lot of very 'experienced' 23 year olds

By the definition of the word--no, there aren't. You can have talented, capable, and driven 23 year olds, but not 'very experienced.' Given that a working adult's life spans approximately 40-50 years, a 23 year old claiming that title is laughable (exceptions being musical prodigies and athletes who have trained from childhood).

surak(4027) 4 days ago [-]

In all kindness, let me assure you that experience is rarely gained until your forties. Early on we may feel on top and hold important positions, but it is not the same thing as understanding the other side. Gaining wisdom probably takes another 20 years; I'm not there yet myself.

MFLoon(10000) 4 days ago [-]

This isn't a matter of unconscious bias. People can (and, arguably, should) have a conscious, intentional bias, against inexperience, when it comes to selecting for skilled work. Age drops off as a useful proxy for experience once someone's been in the workforce for some time, but as others point out, a 23 year old is still going to be definitionally inexperienced in most domains.

wgerard(10000) 4 days ago [-]

I think the point is more like:

It's hard to imagine a recent college graduate understanding the machinations of corporate entities enough to give meaningful advice about how to run more 'efficiently'. It's a bit like when a SV company is promising to revolutionize farm equipment and the founders have no background in agriculture - weirder things have happened, sure, but you can see why people are a bit skeptical.

Yes, it's obviously a bit unfair because sometimes untainted eyes can be the most beneficial, but hopefully you can also see why there's a bit of skepticism.

Historical Discussions: 0.30000000000000004 (December 02, 2019: 765 points)

(765) 0.30000000000000004

765 points 7 days ago by beznet in 10000th position

0.30000000000000004.com | Estimated reading time – 12 minutes | comments | anchor

Language | Code | Result

ABAP
WRITE / CONV f( '.1' + '.2' ).
WRITE / CONV decfloat16( '.1' + '.2' ).




0.1 + 0.2


with Ada.Text_IO; use Ada.Text_IO;
procedure Sum is
  A : Float := 0.1;
  B : Float := 0.2;
  C : Float := A + B;
begin
  Put_Line(Float'Image(C));
  Put_Line(Float'Image(0.1 + 0.2));
end Sum;

3.00000E-01 3.00000E-01

MsgBox, % 0.1 + 0.2


int main(int argc, char** argv) {
    printf("%.17f\n", .1+.2);
    return 0;
}


Console.WriteLine("{0:R}", .1 + .2);
Console.WriteLine("{0:R}", .1f + .2f);
Console.WriteLine("{0:R}", .1m + .2m);






#include <iomanip>
#include <iostream>
int main() {
    std::cout << std::setprecision(17) << 0.1 + 0.2;
}


(+ 0.1 0.2)


<cfset foo = .1 + .2>


Common Lisp
(+ .1 .2)
(+ 1/10 2/10)
(+ 0.1d0 0.2d0)
(- 1.2 1.0)








puts 0.1 + 0.2
puts 0.1_f32 + 0.2_f32




import std.stdio;
void main(string[] args) {
  writefln("%.17f", .1+.2);
  writefln("%.17f", .1f+.2f);
  writefln("%.17f", .1L+.2L);

0.29999999999999999 0.30000001192092896 0.30000000000000000

print(.1 + .2);


Delphi XE5
writeln(0.1 + 0.2);


IO.puts(0.1 + 0.2)


0.1 + 0.2


Emacs Lisp
(+ .1 .2)


io:format('~w~n', [0.1 + 0.2]).


  real(kind=4) :: x4, y4
  real(kind=8) :: x8, y8
  real(kind=16) :: x16, y16
  ! REAL literals are single precision, use _8 or _16
  ! if the literal should be wider.
  x4 = .1; x8 = .1_8; x16 = .1_16
  y4 = .2; y8 = .2_8; y16 = .2_16
  write (*,*) x4 + y4, x8 + y8, x16 + y16

0.300000012 0.30000000000000004 0.300000000000000000000000000000000039

GHC (Haskell)
* 0.1 + 0.2 :: Double
* 0.1 + 0.2 :: Float

* 0.30000000000000004


* 0.3

0.1e 0.2e f+ f.


package main

import "fmt"

func main() {
  fmt.Println(.1 + .2)
  var a float64 = .1
  var b float64 = .2
  fmt.Println(a + b)
  fmt.Printf("%.54f\n", .1 + .2)
}

0.3 0.30000000000000004 0.299999999999999988897769753748434595763683319091796875

println 0.1 + 0.2


Hugs (Haskell)
0.1 + 0.2


(0.1 + 0.2) print


System.out.println(.1 + .2);
System.out.println(.1F + .2F);




console.log(.1 + .2);


.1 + .2


K (Kona)
0.1 + 0.2






print(.1 + .2)
print(string.format('%0.17f', 0.1 + 0.2))




0.1 + 0.2


0.1 + 0.2




SELECT .1 + .2;


echo(0.1 + 0.2)


0.1 +. 0.2;;

float = 0.300000000000000044

#import <Foundation/Foundation.h>
int main(int argc, const char * argv[]) {
  @autoreleasepool {
    NSLog(@"%.17f\n", .1+.2);
  }
  return 0;
}


echo .1 + .2; 
var_dump(.1 + .2);
var_dump(bcadd(.1, .2, 1));

0.3 float(0.30000000000000004441) string(3) "0.3"

perl -E 'say 0.1+0.2'
perl -e 'printf q{%.17f}, 0.1+0.2'
perl -MMath::BigFloat -E 'say Math::BigFloat->new(q{0.1}) + Math::BigFloat->new(q{0.2})'






[load 'frac.min.l']  # https://gist.github.com/6016d743c4c124a1c04fc12accf7ef17
[println (+ (/ 1 10) (/ 2 10))]

(/ 3 10)

SELECT 0.1::float + 0.2::float;


PS C:\>0.1 + 0.2


Prolog (SWI-Prolog)
?- X is 0.1 + 0.2.

X = 0.30000000000000004.

0.1 + 0.2
~0.1 + ~0.2




Python 2
print(.1 + .2)
.1 + .2
float(decimal.Decimal('.1') + decimal.Decimal('.2'))
float(fractions.Fraction('0.1') + fractions.Fraction('0.2'))








Python 3
print(.1 + .2)
.1 + .2
float(decimal.Decimal('.1') + decimal.Decimal('.2'))
float(fractions.Fraction('0.1') + fractions.Fraction('0.2'))








print(.1+.2, digits=18)




Racket (PLT Scheme)
(+ .1 .2)
(+ 1/10 2/10)




raku -e 'say 0.1 + 0.2'
raku -e 'say (0.1 + 0.2).fmt("%.17f")'
raku -e 'say 1/10 + 2/10'
raku -e 'say 0.1e0 + 0.2e0'








puts 0.1 + 0.2
puts 1/10r + 2/10r




extern crate num;
use num::rational::Ratio;
fn main() {
    println!("{}", 0.1 + 0.2);
    println!("1/10 + 2/10 = {}", Ratio::new(1, 10) + Ratio::new(2, 10));
}



1/10 + 2/10 = 3/10

.1 + .2
RDF(.1) + RDF(.2)
RBF('.1') + RBF('.2')
QQ('1/10') + QQ('2/10')





["0.300000000000000 +/- 1.64e-16"]



0.1 + 0.2.


0.1 + 0.2
Decimal(0.1) + Decimal(0.2)




puts [expr .1 + .2]


Turbo Pascal 7.0
writeln(0.1 + 0.2);


static int main(string[] args) {
  stdout.printf("%.17f\n", 0.1 + 0.2);
  return 0;
}


Visual Basic 6
a# = 0.1 + 0.2: b# = 0.3
Debug.Print Format(a - b, "0." & String(16, "0"))
Debug.Print a = b

0.0000000000000001 False

WebAssembly (WAST)
(func $add_f32 (result f32)
    f32.const 0.1
    f32.const 0.2
    f32.add)
(export "add_f32" (func $add_f32))
(func $add_f64 (result f64)
    f64.const 0.1
    f64.const 0.2
    f64.add)
(export "add_f64" (func $add_f64))




awk 'BEGIN { print 0.1 + 0.2 }'


0.1 + 0.2


0.1 0.2 + p


+ .1 .2


scala -e 'println(0.1 + 0.2)'
scala -e 'println(0.1F + 0.2F)'
scala -e 'println(BigDecimal("0.1") + BigDecimal("0.2"))'






echo "$((.1+.2))"


All Comments: [-] | anchor

GuB-42(10000) 7 days ago [-]

That's one of the worst domain names ever. When the topic comes along, I always remember 'that single-serving website with a domain name that looks like a number' and then take a surprisingly long time searching for it.

I have written a test framework and I am quite familiar with these problems, and comparing floating point numbers is a PITA. I had users complaining that 0.3 is not 0.3.

The code managing these comparisons turned out to be more complex than expected. The idea is that values are represented as ranges, so, for example, the IEEE-754 '0.3' is represented as ]0.299~, 0.300~[ which makes it equal to a true 0.3, because 0.3 is within that range.
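The range-comparison idea described above can be approximated in plain Python with a tolerance check. This is a minimal sketch using `math.isclose`, not the commenter's actual framework (the `float_eq` name is mine):

```python
import math

def float_eq(actual, expected, rel_tol=1e-9, abs_tol=1e-12):
    """Treat two floats as equal when they fall within a small range of
    each other, instead of demanding bit-for-bit equality."""
    return math.isclose(actual, expected, rel_tol=rel_tol, abs_tol=abs_tol)

print(0.1 + 0.2 == 0.3)          # False: exact comparison fails
print(float_eq(0.1 + 0.2, 0.3))  # True: range-style comparison succeeds
```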

qwerty456127(4199) 6 days ago [-]

> That's one of the worst domain name ever. When the topic comes along, I always remember about 'that single-serving website with a domain name that looks like a number' and then take a surprisingly long time searching for it.

That's why we need regular expressions support in every search box, browser history, bookmarks and Google included.

erikwiffin(4105) 6 days ago [-]

It may be the worst domain name ever, but the site only exists because I thought that using '0' as a subdomain was a neat trick, and worked back from there to figure out what to do with it.

FWIW - the only way I can ever find my own website is by searching for it in my github repositories. So I definitely agree, it's not a terribly memorable domain.

mynameisvlad(4086) 7 days ago [-]

It's the first result for 'floating point site' on Google. Sure the domain itself is impossible to remember, but you don't have to remember the actual number, just what it stands for.

elwell(1705) 7 days ago [-]

> That's one of the worst domain name ever.

Maybe the creator's theory is that people will search for 0.30000000000000004 when they run into it after running their code.

LeanderK(4222) 7 days ago [-]

just add 0.1 and 0.2 in fp32 (?) accuracy if you can't remember the name :)

dang(179) 7 days ago [-]
myroon5(2177) 7 days ago [-]

I'd always wondered how automated these comments were

umanwizard(4025) 7 days ago [-]

FWIW, both of those can be expressed exactly by floating-point numbers ;)

beckerdo(10000) 6 days ago [-]

Please check some of the online papers on Posit numbers and Unum computing, especially by John Gustafson. In general, Unums can represent more numbers, with less rounding, and fewer exceptions than floating points. Many software and hardware vendors are starting to do interesting work with Posits.

StefanKarpinski(3742) 6 days ago [-]

Probably one of the more in depth technical discussions of the pros and cons of the various proposals that John Gustafson has made over the years:


ufo(10000) 7 days ago [-]

One small tip about printf for floating point numbers. In addition to '%f', you can also print them using '%g'. While the precision specifier in %f refers to digits after the decimal period, in %g the precision refers to the number of significant digits. The %g version is also allowed to use exponential notation, which often results in more pleasant-looking output than %f.

   printf("%.4g", 1.125e10) --> 1.125e+10
   printf("%.4f", 1.125e10) --> 11250000000.0000
kps(10000) 7 days ago [-]

And %e always uses exponential notation. Then there's %a, which can be exact for binary floats.
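Python's %-style formatting follows the same C conventions for `%f`, `%g`, and `%e`, and `float.hex()` plays roughly the role of C's `%a`; a quick sketch of the difference:

```python
x = 1.125e10
print('%.4g' % x)  # 4 significant digits: 1.125e+10
print('%.4f' % x)  # 4 digits after the decimal point: 11250000000.0000
print('%.4e' % x)  # always exponential: 1.1250e+10

# float.hex() gives an exact base-2 representation, similar to C's %a:
print((0.1 + 0.2).hex())
```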

jonny_eh(1990) 7 days ago [-]

> It's actually pretty simple

The explanation then goes on to be very complex. e.g. 'it can only express fractions that use a prime factor of the base'.

Please don't say things like this when explaining things to people, it makes them feel stupid if it doesn't click with the first explanation.

I suggest instead 'It's actually rather interesting'.

throwaway40324(10000) 7 days ago [-]

Thanks for this.

Yep, I've thrown 10,000 round house kicks and can teach you to do one. It's so easy.

In reality, it will be super awkward, possibly hurt, and you'll fall on your ass one or more times trying to do it.

4ntonius8lock(10000) 7 days ago [-]

For including words in a sales pitch, I'd agree.

But this isn't a sales pitch. Some people are just bad at things. The explanation on that page requires grade-school levels of math. I think math that's taught in grade school can objectively be called simple. Some people suck at math. That's ok.

I'm very geeky. I get geeky things. Many times geeky things can be very simple to me.

I went to a dance lesson. I'm terribly uncoordinated physically. They taught me a very 'simple' dance step. The class got it right away. The more physically able got it in 3 minutes. It took me a long time to get, having to repeat the beginner class many times.

Instead of being self-absorbed and expecting the rest of the world to anticipate every one of my possible ego-dystonic sensibilities, I simply accepted that I'm not good at that. It makes things easier for me and for the rest of the world.

The reality is, just like the explanation and the dance step, they are simple because they are relatively simple for the field.

I think such over-sensitivity is based on a combination of expecting never to encounter ego-dystonic events/words, which is unrealistic and removes many/most growth opportunities in life, and the idea that things we don't know can be simple (basically, reality is complicated). I think we've gotten so used to catering to the lowest common denominator, we've forgotten that it's ok for people to feel stupid/ugly/silly/embarrassed/etc. Those bad feelings are normal, feeling them is ok, and they should help guide us in life, not be something to run from or get upset if someone didn't anticipate your ego-dystonic reaction to objectively correct usage of words.

headmelted(2657) 7 days ago [-]

Ditto as I now feel stupid.

I read the rest of your reply but I also haven't let go of the possibility that we both (or precisely 100.000000001% of us collectively) are as thick as a stump.

gpderetta(3631) 7 days ago [-]

there are only two kinds of problems: trivial problems and those that you don't know how to solve (yet).

erikwiffin(4105) 7 days ago [-]

Thanks for the suggestion. I've updated the text.

tomca32(10000) 7 days ago [-]

The problem is that almost everything is simple once you understand it. Once you understand something, you think it's pretty simple to explain it.

On the other hand, people say 'it's actually pretty simple' to encourage someone to listen to the explanation rather than to give up before they even heard anything, as we often do.

Tade0(10000) 7 days ago [-]

I had to use google translate for this one, because I didn't suspect the translation to my language to be so literal.

My take is that this sentence is badly worded. How do these fractions specifically use those prime factors?

Apparently the idea is that a fraction 1/N, where N is a prime factor of the base, has a finite (terminating) expansion in that base.

So for base 10, at least 1/2 and 1/5 have to terminate.

And given that a product of terminating fractions also terminates, no matter what combination of those two you multiply, you'll get a number that terminates in base 10, so 1/2 * 1/2 = 1/4 terminates, (1/2)^3 = 1/8 terminates, etc.

Same thing goes for the sum, of course.

So apparently those fractions 'use' those prime factors by being products of their reciprocals, which isn't mentioned here but should have been.
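
The rule being reached for here can be checked mechanically: 1/N terminates in base b exactly when every prime factor of the reduced denominator also divides b. A sketch in Python (the helper name is illustrative):

```python
import math
from fractions import Fraction

def terminates_in_base(frac, base):
    """True iff `frac` has a finite expansion in `base`.
    After reducing the fraction, every prime factor of the
    denominator must also divide the base."""
    d = Fraction(frac).denominator
    while d > 1:
        g = math.gcd(d, base)
        if g == 1:
            return False  # denominator has a prime the base lacks
        while d % g == 0:
            d //= g
    return True

# Base 10 = 2 * 5, so denominators built from 2s and 5s terminate:
assert terminates_in_base(Fraction(1, 8), 10)     # 0.125
assert terminates_in_base(Fraction(1, 20), 10)    # 0.05
assert not terminates_in_base(Fraction(1, 3), 10)
# In base 2, 1/10 does not terminate -- hence 0.1's float trouble:
assert not terminates_in_base(Fraction(1, 10), 2)
```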

ggggtez(10000) 7 days ago [-]

>Why does this happen? It's actually rather interesting.

Did the text change in the last 15 minutes?

qwerty456127(4199) 6 days ago [-]

As soon as I started developing real-life business apps, I started to dream about POWER, which is said to have hardware decimal type support. Java's BigDecimal solves the problem on x86, but it is at least an order of magnitude slower than FPU-accelerated types.

ernst_klim(10000) 6 days ago [-]

Well, if your decimals are fixed-point decimals, which is the case in finance, decimal calculations are very cheap integer calculations (with simple additional scaling in multiplication/division).

I just use Zarith (a bignum library) in OCaml for decimal calculation, and I'm pretty content with the performance.

I don't think many domains need decimal floating point that much, honestly, at least not in finance or scientific calculation.

But I could be wrong, and would be interested in cases where decimal floating-point calculations are preferable over these done in decimal fixed-point or IEEE floating-point ones.
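
The scheme described here (decimals as scaled integers, with a rescale on multiplication) can be sketched in Python; the helper names and the round-half-up choice are illustrative, not from any particular library:

```python
SCALE = 100  # two decimal places: store amounts as integer cents

def to_cents(text):
    """Parse a non-negative decimal string like '19.99' into cents.
    (Sign handling omitted for brevity.)"""
    whole, _, frac = text.partition('.')
    return int(whole or '0') * SCALE + int((frac + '00')[:2])

def mul_scaled(a, b):
    """Multiply two scaled values. The raw product carries the
    scale twice, so divide one factor of SCALE back out,
    rounding half up."""
    return (a * b + SCALE // 2) // SCALE

price = to_cents('19.99')        # 1999 cents
rate = to_cents('1.07')          # 107, i.e. +7% tax
total = mul_scaled(price, rate)  # 19.99 * 1.07 = 21.3893 -> 2139 cents
assert total == 2139
# And the classic example is exact, since everything is integers:
assert to_cents('0.10') + to_cents('0.20') == to_cents('0.30')
```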

gowld(10000) 7 days ago [-]

This is a great shibboleth for identifying mature programmers who understand the complexity of computers, vs arrogant people who wonder aloud how systems developers and language designers could get such a 'simple' thing wrong.

hutzlibu(10000) 7 days ago [-]

' vs arrogant people who wonder aloud how systems developers and language designers could get such a 'simple' thing wrong.'

I never heard anyone complain that it would be simple to fix. But complaining? Yes, and rightfully so. Not every web programmer needs to know the hardware details, nor wants to, so it is understandable that this causes irritation.

garyclarke27(4181) 6 days ago [-]

Postgresql figured this out many years ago with their Decimal/Numeric type. It can handle any size number and it performs fractional arithmetic perfectly accurately - how amazingly for the 21st Century! It is comically tragic to me that all of the mainstream programming languages are still so far behind, so primitive, that they do not have a native accurate number type that can handle fractions.

josefx(10000) 6 days ago [-]

> how amazingly for the 21st Century!

Most languages have classes for that, some had them for decades in fact. Hardware floating point numbers target performance and most likely beat any of those classes by orders of magnitude.

lelf(18) 7 days ago [-]

That's only formatting.

The other (and more important) matter, which is not even mentioned, is comparison. E.g. in 'rational by default in this specific case' languages (Perl 6),

  > 0.1+0.2==0.3
Or APL (they are floats there, but comparison is special)

      ⎕PP←20 ⋄ 0.1+0.2
      (0.1+0.2) ≡ 0.3
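
The same comparison can be reproduced with Python's stdlib fractions module (explicit rationals, rather than a language default as in Perl 6/Raku):

```python
from fractions import Fraction

# Binary doubles: the comparison famously fails.
assert 0.1 + 0.2 != 0.3

# Exact rationals: it holds, which is what rational-by-default
# literals give you.
assert Fraction('0.1') + Fraction('0.2') == Fraction('0.3')
assert Fraction('0.1') + Fraction('0.2') == Fraction(3, 10)
```
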
Athas(2860) 7 days ago [-]

Exactly what are the rules for the 'special comparison' in APL? That sounds horrifying to me.

mc3(10000) 7 days ago [-]

This is a good thing to be aware of.

Also the 'field' of floating point numbers is not commutative†, (can run on JS console:)

x=0;for (let i=0; i<10000; i++) { x+=0.0000000000000000001; }; x+=1

--> 1.000000000000001

x=1;for (let i=0; i<10000; i++) { x+=0.0000000000000000001; };

--> 1

Although most of the time a+b===b+a can be relied on. And for most of the stuff we do on the web it's fine!††

† edit: Please s/commutative/associative/, thanks for the comments below.

†† edit: that's wrong! Replace with (a+b)+c === a+(b+c)

mike_hock(10000) 7 days ago [-]

Isn't that more of an associativity problem than a commutativity problem, though?

1.0 + 1e-16 == 1e-16 + 1.0 == 1.0 as well as 1.0 + 1e-15 == 1e-15 + 1.0 == 1.000000000000001

however (1.0 + (1e-16 + 1e-16)) == 1.0 + 2e-16 == 1.0000000000000002, whereas ((1.0 + 1e-16) + 1e-16) == 1.0 + 1e-16 == 1.0

kstrauser(4198) 7 days ago [-]

Yep. The TL;DR of a numerical analysis class I took is that if you're going to sum a list of floats, sort it by increasing numeric value first so that the tiny values aren't rounded to zero every time.
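
That summation-order effect is easy to demonstrate; a sketch with values chosen to exaggerate it:

```python
values = [1.0] + [1e-16] * 10_000

# Biggest first: each 1e-16 is below half an ulp of the running
# total (1.0) and is rounded away on every addition.
big_first = 0.0
for v in sorted(values, reverse=True):
    big_first += v

# Smallest first: the tiny terms accumulate to ~1e-12 before 1.0
# joins, so their contribution is preserved.
small_first = 0.0
for v in sorted(values):
    small_first += v

assert big_first == 1.0
assert small_first > 1.0
```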

thaumasiotes(3713) 7 days ago [-]

> Also the 'field' of floating point numbers is not commutative, (can run on JS console:)


    >> x = 0;
    >> for (let i=0; i<10000; i++) { x+=0.0000000000000000001; };
    >> x + 1
    >> 1 + x
You've identified a problem, but it isn't that addition is noncommutative.
gus_massa(1487) 7 days ago [-]

Note that the addition is commutative [1], i.e. a+b==b+a always.

What is failing is associativity, i.e. (a+b)+c==a+(b+c)

For example

(.0000000000000001 + .0000000000000001 ) + 1.0

--> 1.0000000000000002

.0000000000000001 + (.0000000000000001 + 1.0)

--> 1.0

In your example, you are mixing both properties,

(.0000000000000001 + .0000000000000001) + 1.0

--> 1.0000000000000002

(1.0 + .0000000000000001) + .0000000000000001

--> 1.0

but the difference is caused by the lack of associativity, not by the lack of commutativity.

[1] Perhaps you must exclude -0.0. I think it is commutative even with -0.0, but I'm never 100% sure.
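
The distinction can be checked directly; a sketch with Python doubles:

```python
a, b, c = 1e-16, 1e-16, 1.0

# Commutativity holds: swapping operands never changes a result.
assert a + c == c + a
assert a + b == b + a

# Associativity fails: regrouping does change the result.
left = (a + b) + c   # ~2e-16 is above half an ulp of 1.0 and survives
right = a + (b + c)  # each lone 1e-16 is below half an ulp and vanishes
assert left == 1.0000000000000002
assert right == 1.0
assert left != right
```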

teraflop(10000) 7 days ago [-]

Your example shows that floating-point addition isn't associative, not that it isn't commutative.

dspillett(4107) 7 days ago [-]

MS Excel tries to be clever and disguise the most common places this is noticed.

Give it =0.1+0.2-0.3 and it will see what you are trying to do and return 0.

Give it anything slightly more complicated such as =(0.1+0.2-0.3) and this won't trip, in this example displaying 5.55112E-17 or similar.
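
The residue Excel papers over is visible in any language with IEEE doubles; for instance:

```python
residue = 0.1 + 0.2 - 0.3
# The doubles nearest 0.1 and 0.2 sum to slightly more than the
# double nearest 0.3; the leftover is exactly 2**-54, which prints
# as the 5.55112E-17 that Excel shows when its heuristic doesn't fire.
assert residue == 2**-54
assert residue == 5.551115123125783e-17
```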

FabHK(3966) 6 days ago [-]

Kahan (architect of IEEE 754) has a nice rant on it:


(and plenty of other rants...:

https://people.eecs.berkeley.edu/~wkahan/ )

piadodjanho(10000) 7 days ago [-]

Are you sure it is not showing the exact answer because the cell precision is set to a single decimal digit?

Ididntdothis(4192) 7 days ago [-]

I still remember when I encountered this and nobody else in the office knew about it either. We speculated about broken CPUs and compilers until somebody found a newsgroup post that explained everything. Makes me wonder why we haven't switched to a better floating point model in the last decades. It will probably be slower but a lot of problems could be avoided.

gowld(10000) 7 days ago [-]

Being a lot slower is a worse problem than being off by an error of 2^-60. And if it isn't, then you simply choose a different numeric type.

masklinn(3065) 7 days ago [-]

> Makes me wonder why we haven't switched to a better floating point model in the last decades. It will probably be slower but a lot of problems could be avoided.

Pretty much all languages have some sort of decimal number. Few or none have made it the default because they're ignominiously slower than binary floating-point. To the extent that even languages which have made arbitrary precision integers their default firmly keep to binary floating-point.

snickerbockers(10000) 7 days ago [-]

There is no 'better floating point model' because floating point will always be floating point. Fixed point always has been and always will be an option if you don't like the exponential notation.

dragontamer(2857) 7 days ago [-]

> Makes me wonder why we haven't switched to a better floating point model in the last decades.

The opposite.

Decimal floating point has been available in COBOL since the 1960s, but seems to have fallen out of favor in recent decades. This might be a reason why bankers / financial data remain on ancient COBOL systems.

Fun fact: PowerPC systems still support decimal-floats natively (even the most recent POWER9). I presume IBM is selling many systems that natively need that decimal-float functionality.

anchpop(3593) 7 days ago [-]

Many languages have types for infinite-precision rational numbers, for example Rational in Haskell.

throwaway2048(1384) 7 days ago [-]

Floating point is fundamentally a trade off between enumerable numbers (precision) and range between minimum/maximum numbers, it exists because fast operations on numbers are not possible with arbitrary precision constructs (you can easily have CPU/GPU operations where floating point numbers fit in registers, arbitrary precision by its very nature is arbitrarily large).

With many operations this trade off makes sense, however its critical to understand the limitations of the model.

ryandrake(4132) 7 days ago [-]

Wait, an entire office (presumably full of programmers) didn't understand floating point representation? What office was this? Isn't this topic covered first in every programming book or course where floating point math is covered?

w-j-w(10000) 7 days ago [-]

At this time, understanding floating point numbers falls under 'programmer responsibility', and most languages follow the example set by C. Most languages are designed by people who find the idea of not understanding floating point numbers kind of absurd.

maxdamantus(10000) 7 days ago [-]

Unless you have a floating point model that supports arbitrary bases, you're always going to have the issue. Binary floats are unable to represent 1/10 just as decimal floats are unable to represent 1/3.

And in case anyone's wondering about handling it by representing the repeating digits instead, here's the decimal representation of 1/12345 using repeating digits:

acqq(1272) 7 days ago [-]

Decimal floating point has been standardized since 2008:


But it's still not much used. E.g. for C++ it was proposed in 2012 for the first time


then revised in 2014:


...and... silence?


goosehonk(10000) 7 days ago [-]

When did RFC1035 get thrown under the bus? According to it, with respect to domain name labels, 'They must start with a letter' (2.3.1).

jlv2(10000) 7 days ago [-]

Long, long ago. 3com.com wanted to exist.

jdnenej(10000) 7 days ago [-]

Ages ago I guess. 1password doesn't start with a letter either.

jwilk(3671) 6 days ago [-]

All-digit host names have been allowed since 1989.


One aspect of host name syntax is hereby changed: the restriction on the first character is relaxed to allow either a letter or a digit. Host software MUST support this more liberal syntax.

knome(10000) 7 days ago [-]

The same document defines `in-addr.arpa` domains that have numeric labels.

The mandate of a starting letter was for backwards compatibility, and mentions it in light of keeping names compatible with email servers and HOSTS files it was replacing.

Taking a numeric label risks incompatibility with antiquated systems, but I doubt it will affect any modern browser.

DonHopkins(3245) 7 days ago [-]

The runner up for length is FORTRAN with: 0.300000000000000000000000000000000039

And the length (but not value) winner is GO with: 0.299999999999999988897769753748434595763683319091796875

exegete(10000) 7 days ago [-]

Those look like the same length

brundolf(2156) 7 days ago [-]

I remember in college when we learned about this and I had the thought, 'Why don't we just store the numerator and denominator?', and threw together a little C++ class complete with (then novel, to me) operator-overloads, which implemented the concept. I felt very proud of myself. Then years later I learned that it's a thing people actually use: https://en.wikipedia.org/wiki/Rational_data_type
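
The class described here is only a few lines in any language with operator overloading; an illustrative Python sketch (the stdlib fractions.Fraction is the production version, and sign normalization is omitted for brevity):

```python
from math import gcd

class Rat:
    """Toy rational number: stores numerator/denominator, reduced.
    Assumes positive denominators, for illustration."""
    def __init__(self, num, den=1):
        if den == 0:
            raise ZeroDivisionError('zero denominator')
        g = gcd(num, den)
        self.num, self.den = num // g, den // g

    def __add__(self, other):
        return Rat(self.num * other.den + other.num * self.den,
                   self.den * other.den)

    def __mul__(self, other):
        return Rat(self.num * other.num, self.den * other.den)

    def __eq__(self, other):
        return (self.num, self.den) == (other.num, other.den)

# 1/10 + 2/10 == 3/10, exactly -- no float drift.
assert Rat(1, 10) + Rat(2, 10) == Rat(3, 10)
```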

Koshkin(4163) 7 days ago [-]

But rationals are more expensive to compute with (compared to floating-point; this is another example of the trade-off between performance and accuracy.)

elwell(1705) 7 days ago [-]
protomyth(96) 7 days ago [-]

Handling quantities with varying Unit of Measures is made quite a bit easier by using numerator and denominator pairs.

hzhou321(4150) 7 days ago [-]

You either feel smart by wondering why people don't use rationals, or feel smart by wondering why people use rationals.

simias(4157) 7 days ago [-]

Another compromise is to use fixed point, which is effectively a rational with a fixed denominator. It's extremely popular on machines which can handle integer arithmetic but not floating point (since you can trivially do fixed-point arithmetic using integer operations; you just need to be very careful when you handle overflows). If you look at the code of old-school games (including classics like Doom, if memory serves), the game engine used fixed point to work on commodity hardware without an FPU.

There's also BCD (binary coded decimal) that can solve some problems by avoiding the decimal-to-binary conversions if you're mainly dealing with decimal values. For instance 0.2 can't usually be represented in binary but of course it poses no problem in BCD.
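
The classic binary variant is 16.16 fixed point: the low 16 bits hold the fraction, and multiplication needs a shift to drop the doubled scale. A sketch (Python ints stand in for 32-bit registers, so the overflow care mentioned above doesn't show up here):

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in 16.16 fixed point

def to_fixed(x):
    return int(round(x * ONE))

def fixed_mul(a, b):
    # The product of two 16.16 numbers carries 32 fraction bits;
    # shift back down to 16. On real 32-bit hardware this is where
    # overflow has to be handled -- Python ints just grow.
    return (a * b) >> FRAC_BITS

def to_float(a):
    return a / ONE

half = to_fixed(0.5)    # 32768
three = to_fixed(3.0)   # 196608
assert to_float(fixed_mul(half, three)) == 1.5
```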

tim333(1252) 7 days ago [-]

When I was reading about this, I wondered why the print functions don't just round to the nearest 10 decimal places or so by default, so that 0.30000000000000004 prints as 0.3 unless you specify you don't want that. I wrote a function in JavaScript to round like that, though it was surprisingly tricky and messy to do.
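
A minimal version of that rounding-before-printing idea, sketched in Python rather than JavaScript; as noted above, the general problem is messier than this (no significant-figure handling for very small numbers, for instance):

```python
def show(x, places=10):
    """Format a float rounded to `places` decimal places, then trim
    trailing zeros, so artifacts below that precision disappear."""
    return f'{x:.{places}f}'.rstrip('0').rstrip('.')

assert show(0.1 + 0.2) == '0.3'
assert show(0.30000000000000004) == '0.3'
assert show(2.5) == '2.5'
assert show(1.0) == '1'
```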

miketuritzin(3846) 7 days ago [-]

Reminds me of this Inigo Quilez article on experimenting with rendering using rational numbers: https://iquilezles.org/www/articles/floatingbar/floatingbar....

loopz(10000) 7 days ago [-]

It's actually in use in many places, for things like handling currency and money, and for the funny corner cases involving rounding such numbers and pooling the change.

Whenever I see someone handling currency in floats, something inside me withers and dies a small death.

gowld(10000) 7 days ago [-]

'Why don't we just' because it's harder than one thinks.


and gets harder when you want exact irrationals too https://www.google.com/search?q=exact+real+arithmetic

megous(4179) 7 days ago [-]

Not all numbers are rational.

phoe-krk(3078) 7 days ago [-]

Some langs have that in their standard included batteries.

(Shameless Common Lisp plug: http://clhs.lisp.se/Body/t_ration.htm)

povik(10000) 7 days ago [-]

In the Go example, can someone explain the difference between the first and the last case?

ehsankia(10000) 7 days ago [-]

There's a link right below. It seems like

1. Constants have arbitrary precision.
2. When you assign them, they lose precision (example 2).
3. You can format them at arbitrary precision in a string (example 3).

In that last example, they are getting 54 significant digits in base 10.

threatofrain(3605) 7 days ago [-]

Use Int types for programming logic.

saagarjha(10000) 7 days ago [-]

Except when, you know, you can't.

pmarreck(4137) 7 days ago [-]

IEEE floating-point is disgusting. The non-determinism and illusion of accuracy is just wrong.

I use integer or fixed-point decimal if at all possible. If the algorithm needs floats, I convert it to work with integer or fixed-point decimal instead. (Or if possible, I see the decimal point as a 'rendering concern' and just do the math in integers and leave the view to put the decimal by whatever my selected precision is.)

saagarjha(10000) 7 days ago [-]

IEEE is deterministic and (IMO) quite well thought-out. What specifically do you not like about it?

virtualparrot(10000) 7 days ago [-]

I agree with this view; there's nothing more disgusting than non-determinism. The way computers rely on assumptions for the accuracy of a floating-point number is contrary to the principles of logical thinking.

kstrauser(4198) 7 days ago [-]

Depends on the field. 99.9000000001% of the time, the stuff I do is entirely insensitive to anything after the third decimal point. And for my use cases, IEEE 754 is a beautiful stroke of genius that handles almost everything I ask from it. That's generally the case for most applications. If it wasn't, it wouldn't be so overwhelmingly universally used.

But again, there are clearly plenty of use cases where it's insufficient, as you can vouch. I still don't think you can call it 'disgusting', though.

enriquto(10000) 7 days ago [-]

you may dislike IEEE floats for many reasons, but not for being non-deterministic. Their operations are described by completely deterministic rules.

Fixed point is perfectly OK, if all your numbers are within a few orders of magnitude (e.g. money)

maxdamantus(10000) 7 days ago [-]

I feel like it should really be emphasised that the reason this occurs is due to a mismatch between binary exponentiation and decimal exponentiation.

0.1 = 1 × 10^-1, but there is no integer significand s and integer exponent e such that 0.1 = s × 2^e.

When this issue comes up, people seem to often talk about fixing it by using decimal floats or fixed-point numbers (using some 10^x divisor). If you change the base, you solve the problem of representing 0.1, but whatever base you choose, you're going to have unrepresentable rationals. Base 2 fails to represent 1/10 just as base 10 fails to represent 1/3. All you're doing by using something based around the number 10 is supporting numbers that we expect to be able to write on paper, not solving some fundamental issue of number representation.

Also, binary-coded decimal is irrelevant. The thing you're wanting to change is which base is used, not how any integers are represented in memory.
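
The mismatch can be made concrete: Python's Fraction(float) recovers the exact dyadic rational a double actually stores:

```python
from fractions import Fraction

# The double written as 0.1 is really this nearby rational:
exact = Fraction(0.1)
assert exact == Fraction(3602879701896397, 2**55)
assert exact != Fraction(1, 10)

# The denominator is a power of two -- every double is s * 2**e,
# and 10 has the prime factor 5 that 2 lacks.
assert exact.denominator == 2**55
```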

lopmotr(10000) 7 days ago [-]

Agree. All of these floating point quirks are not actually problems if you think of them as being finite precision approximations to real numbers, not in any particular base. Just like physical measurements of continuous quantities. You wouldn't be surprised to find an error in the 15th significant figure of some measurement or attempt to compare them for equality or whatever. So don't do it with floating point numbers either and everything will work perfectly.

Yes, there are some exceptions where you can reliably compare equality or get exact decimal values or whatever, but those are kind of hacks that you can only take advantage of by breaking the abstraction.

Akababa(10000) 7 days ago [-]

If you only use decimals in your application, it actually is a fix because you can store the numbers you care about in exact precision. Of course it's not really a fix if you're being pedantic but for a lot of simple UI stuff it's good enough.

mcv(4213) 7 days ago [-]

The big issue here is what you're going to use your numbers for. If you're going to do a lot of fast floating point operations for something like graphics or neural networks, these errors are fine. Speed is more important than exact accuracy.

If you're handling money, or numbers representing some other real, important concern where accuracy matters, most likely any number you intend to show to the user as a number, floats are not what you need.

Back when I started using Groovy, I was very pleased to discover that Groovy's default decimal number literal was translated to a BigDecimal rather than a float. For any sort of website, 9 times out of 10, that's what you need.

I'd really appreciate it if Javascript had a native decimal number type like that.

jancsika(10000) 7 days ago [-]

Hm... what happens if you've got a neural network trained to make decisions in the financial domain?

Is there a way to exploit the difference between numeric precision underlying the neural network and the precision used to represent the financial transactions?

umanwizard(4025) 7 days ago [-]

Decimal numbers are not conceptually any more or less exact than binary numbers. For example, you can't represent 1/3 exactly in decimal, just like you can't represent 1/5 exactly in binary.

When handling money, we care about faithfully reproducing the human-centric quirks of decimal numbers, not 'being more accurate'. There's no reason in principle to regard a system that can't represent 1/3 as being fundamentally more accurate because it happens to be able to represent 1/5.
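
Concretely, with Python's stdlib Decimal:

```python
from decimal import Decimal

# Decimal stores 0.1 exactly, so the classic example behaves:
assert Decimal('0.1') + Decimal('0.2') == Decimal('0.3')

# But 1/3 has no finite decimal expansion; with the default 28
# significant digits, the rounding error just moves elsewhere:
third = Decimal(1) / Decimal(3)
assert third * 3 != Decimal(1)
```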

Gibbon1(10000) 7 days ago [-]

> I'd really appreciate it if Javascript had a native decimal number type like that.

Was proposed in the late 90's Mike Cowlishaw but the rest of the standards committee would have none of it.

tDude-Sans-Rug(10000) 6 days ago [-]

In the world of money, it is rare to have to work past 3 decimal places. Bond traders operate on 32nds, so that might present some difficulties, but they really just want rounding at the hundreds.

Now, when you're talking about central bank accruals (or similar sized deposits) that's a bit different. In these cases, you have a very specific accrual multiple, multiplied by a balance in the multiple hundreds of billions or trillions. In these cases, precision with regards to the interest accrual calculation is quite significant, as rounding can short the payor/payee by several millions of dollars.

Hence the reason bond traders have historically traded in fractions of 32.

A sample bond trade:

'Twenty sticks at a buck two and five eighths bid'
'Offer at 103 full'
'Don't break my balls with this, I got last round at Delmonicos last night'
'Offer 103 firm, what are we doing'
'102-7 for 50 sticks'
'Should have called me earlier and pulled the trigger, 50 sticks offer 103-2'
'Fuck you, I'm your daughter's godfather'
'In that case, 40 sticks, 103-7 offer'
'Fuck you, 10 sticks, 102-7, and you buy me a steak, and my daughter a new dress'
'5 sticks at 104, 45 at 102-3 off tape, and you pick up bar tab and green fees'
'Done'
'You own it'

That's kinda how bonds are traded.

Ref:
Stick: million
Bond pricing: dollar price + number divided by 32
Delmonicos: money bonfire with meals served

mFixman(10000) 6 days ago [-]

I enjoy Haskell's approach to numbers.

The type of any numeric literal is any type of the `Num` class. That means that they can be floating point, fractional, or integers 'for free' depending on where you use them in your programs.

`0.75 + pi` is of type `Floating a => a`, but `0.75 + 1%4` is of type `Rational`.

otabdeveloper4(10000) 6 days ago [-]

'Decimal' is a red herring. The number base doesn't matter. (And what are you going to do when you need currency coversions, anyways?)

Floats are a digital approximation of real numbers, because computers were originally designed for solving math problems - trigonometry and calculus, that is.

For money you want rational numbers, not reals. Unfortunately, computers never got a native rational number type, so you'll have to roll your own.

joppy(10000) 6 days ago [-]

Hear, hear! It would be great if JavaScript had an integral type that we could build decimals, rationals, arbitrarily-large integers and so on on top of. It's technically doable with doubles if you really know what you're doing, but it would be so much easier with an integral type.

brazzy(3912) 6 days ago [-]

> If you're going to do a lot of fast floating point operations for something like graphics or neural networks, these errors are fine. Speed is more important than exact accuracy.

Um... that really depends. If you have an algorithm that is numerically unstable, these errors will quickly lead to a completely wrong result. Using a different type is not going to fix that, of course, and you need to fix the algorithm.

seangrogg(10000) 6 days ago [-]

I'd agree on saner defaults, especially in web development. I can understand that if you want to have strictly one number type, it may make sense to opt for floating point to eke out the performance when you do need it. But I'd rather see high precision as the default (most people expect to be able to write an accurate calculator app in JavaScript without much work) and make floating-point operations the opt-in benefit.

skohan(10000) 7 days ago [-]

This is part of the reason Swift Numerics is helping to make it much nicer to do numerical computing in Swift.


enriquto(10000) 7 days ago [-]

what is the number representation in Swift? Looking at your link it seems to be plain IEEE floats. In that case, wouldn't it have the same behavior?

mvelie(10000) 7 days ago [-]

Swift also has decimal (so does objective-c) which handles this properly. See https://lists.swift.org/pipermail/swift-users/Week-of-Mon-20... to see how swift's implementation of decimal differs from obj-c.

ChuckMcM(629) 7 days ago [-]

That is why I only used base 2310 for my floating point numbers :-). FWIW there are some really interesting decimal format floating point libraries out there (see http://speleotrove.com/decimal/ and https://github.com/MARTIMM/Decimal) and the early computers had decimal as a native type (https://en.wikipedia.org/wiki/Decimal_computer#Early_compute...)

ergfdseragf(10000) 7 days ago [-]

The multiplication of the first 5 primes ;)

lordnacho(1968) 7 days ago [-]

While it's true that floating point has its limitations, this stuff about not using it for money seems overblown to me. I've worked in finance for many years, and it really doesn't matter that much. There are de minimis clauses in contracts that basically say 'forget about the fractions of a cent'. Of course it might still trip up your position checking code, but that's easily fixed with a tiny tolerance.
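
The 'tiny tolerance' fix is standard; e.g. with Python's math.isclose (variable names illustrative):

```python
import math

position = 0.1 + 0.2   # value computed from trades
booked = 0.3           # value on the books

assert position != booked              # an exact check would trip
assert math.isclose(position, booked)  # a relative tolerance passes
# Or an absolute de minimis threshold, as in the contracts mentioned:
assert abs(position - booked) < 1e-9   # far below a fraction of a cent
```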

dionian(10000) 7 days ago [-]

when the fractions actually don't matter, it's so painless to just store everything in pennies rather than dollars (multiply everything by 100)

Historical Discussions: The .Org Fire Sale: How it sold for less than half its valuation (December 03, 2019: 747 points)

(747) The .Org Fire Sale: How it sold for less than half its valuation

747 points 7 days ago by metasj in 10000th position

blogs.harvard.edu | Estimated reading time – 8 minutes | comments | anchor

The .Org Fire Sale: How it sold for less than half its valuation

Part 2 in a series. (See also Part 1: The Great Dot Org Heist.) Updates: Moz letter, El Reg, registry agreement, ISOC forum + letter, Wyden

Ethos Capital seems on track to complete their takeover of .org early next year. ICANN claims it is powerless to stop the acquisition. ISOC president Andrew Sullivan suggested nothing but a court order would make ISOC change their minds. (If the sale concerns you, you can write to the Virginia state DA, who has to approve the sale via the Orphans Court.)

There are still many unanswered questions. Sullivan's presentation of the offer to the ISOC Board highlighted a need for speed and secrecy. Details were redacted from the board minutes, and have been released grudgingly. Only last Friday did the price of the acquisition ($1.1B) finally emerge, which ISOC insists is a good price (or was before the price caps were lifted), but which most consider well below the market value of .org. (For reference, here's PIR's 990 and annual report: $90M revenue, $60M gross margin, 77% renewal rate).

Sullivan shared some conflicting thoughts in an interview with The Register: he thinks not many people care about the sale; public pushback has been strong; the sale would not have happened if there had been public discussion.

Mozilla has compiled Questions about .org into a public letter, asking both ISOC and ICANN to answer them before concluding this sale.

Measuring the worth of a legacy registry

While there is a range of estimates out there for the true value of .org, the sale price is on the low end under conservative assumptions. Lance Wiggs (investor, consultant, former councillor for InternetNZ) estimates that the sale undervalues the registry by $1B-10B, depending on how it is managed, and that the investors are taking almost no risk, for a chance at a 5x return and ample opportunities to flip the registry to a new buyer in the next few years.

How did this happen? Among other things:

  • No competitive bid. (ISOC suggests that they had at least two bids. But the first public mention of bids was in the minutes of their 10/28 board meeting, and by the next day Sullivan entered into exclusive talks with Ethos. ISOC says they considered a public auction, but Ethos said they would not participate in such an auction(!) )
  • No public market / sale analysis. (This makes it hard to determine whether the parties involved built in incentives to close a deal quickly, even if it was well under the market rate for the registry. At least one independent assessment put the value closer to $4B)
  • No public story of how the deal came together. (The outline: Nevett is approached by Ethos for the first time in September, and has a complete offer to put before the ISOC board by late October, developed in private.)
  • No discussion of alternatives to financing an endowment. (National bond funds and other options have been suggested that could have allowed ISOC to diversify its investments without selling PIR outright.)
  • Analyses based on an assumption of no future growth in price or revenue. (Sullivan mentioned that the purchase price was a high multiple of EBITDA, but that's a poor metric when the buyer is doubling prices every 7 years. And all TLD agreements now have 10-yr terms and presumptive right of renewal, so there's no reason to expect the registry to change hands in the future.)
  • No assessment of future impact or development of the registry. (Many questions about pricing, censorship, and squatting need addressing, for Ethos and for whoever it sells the registry to later. The impact on people + orgs in less-developed parts of the world bears addressing in detail.)
  • ISOC as an entity never embraced running a registry, and may have been looking to get out of the business. (per Kieren McCarthy, Afilias convinced them to do it in 2002 – and have managed the registry for a cut of gross revenue, ever since. On the other hand, Sullivan ran ops and name services at Afilias for years...)

How to monetize .org if you must

Once the acquisition is finalized next year, there will be no constraints on Ethos short of their name, and people talking with their feet. They offered to try to keep price hikes predictable and only 5x the US interest rate... but there is no enforcement or even monitoring of this. The cost of switching domains for established organizations is large, so they can experiment boldly without denting their user base. So what might they do?

  • Maintain a 10% annual increase for domains with the least demand.
  • Create multiple price tiers for popular domains, up to 100x the base rate.
  • Open a registry-owned registrar and start undercutting competition.
    • Encourage current holders to renew for 10 years, to soften the blow of price hikes, and raise cash up front to offset the cost of the acquisition.
  • Package and sell data about domain holders and purchases to third parties
  • Explore a merge with Donuts. .org is more valuable in combination, saving on overhead and data analysis, raising the price floor for other TLDs when .org prices rise. Cross-sell across domains. Take the new entity public.
  • Explore a sale back to Verisign.
  • Explore acquisition by other investment groups. Some nations have venture arms that might have extra reason to run .org, for even the chance of influencing what can be hosted on those domains.

Managing an ISOC endowment

Meanwhile, the Internet Society will have a $1.1B endowment to look after, something they have never done before. (Though they did just open a grant-giving Foundation earlier this year.) At a community forum, ISOC representatives said that they were committed to setting up this endowment, but have made no decisions regarding how the endowment would be set up or invested.

Sullivan suggested that their goal is to return roughly the same annual revenue as they had gotten that year from PIR — around $45M. Of course this time without the possibility of expanding an underlying business year by year.

As with the sale details, details about the endowment remain private. Board minutes suggest they may stand up a new entity, the "Connected Giving Foundation", to invest the endowment with guidance from Goldman Sachs, who advised on the acquisition.

Ongoing commentary

Senator Ron Wyden: 'Extremely concerning. I'm looking into how this sale happened, what can be done, and what it will mean for non-profits and users.'

The ISOC board held a public forum in which it again stated that only a court order would change its decision, and published a regretful letter in which they promised to show more empathy for community concerns.



All Comments: [-] | anchor

lancewiggs(4008) 7 days ago [-]

I wrote about this too - linked in the article. https://lancewiggs.com/2019/12/01/did-isoc-leave-1-billion-o...

The travesty is that ISOC has given up a sure-fire stream of $55+ million/year in tax-free income, along with the ability to easily grow that to over $100m/year with price increases - all for just over $1.1 billion.

As any r/personalfinance reader can tell you, a rule of thumb for endowments is to spend a maximum of 4% of your assets each year. That's $44m from the $1.1bn, which leaves ISOC immediately worse off than they were forecasting for this year (~$55m). Alternatively use the Yale method, which in today's low-return market will yield similar or worse results.

Moreover it's clear that ISOC are not behaving as the sharpest of investors, so we can imagine that the endowment might be poorly managed or over-spent.
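
The arithmetic in this comment can be sketched in a few lines (the figures are the thread's own estimates, not audited numbers):

```python
# Back-of-the-envelope check of the 4% endowment rule discussed above.
# Inputs are the comment's estimates: $1.1B proceeds, ~$55M/yr prior income.

def safe_withdrawal(endowment: float, rate: float = 0.04) -> float:
    """Maximum annual spend under a simple fixed-rate endowment rule."""
    return endowment * rate

endowment = 1.1e9
prior_revenue = 55e6

spend = safe_withdrawal(endowment)
print(f"4% rule: ${spend / 1e6:.0f}M/yr vs ~${prior_revenue / 1e6:.0f}M/yr before")
```

At 4%, the endowment supports about $44M a year, which is exactly the gap the comment is pointing at.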

annoyingnoob(10000) 6 days ago [-]

There is a chance that the fire sale price will help keep .org prices from skyrocketing. Maybe they did .org domains a favor.

LegitShady(10000) 6 days ago [-]

I feel like you're missing some capital recovery factors and alternatives analysis to be so strong with your statement.

deepaksurti(1390) 6 days ago [-]

>> is to spend a maximum of 4% of your assets each year.

OT. Isn't 4% also the rule of early retirement, i.e. if you can live off 4% of your savings you can retire?

Can anyone clarify whether this rule applies to both individuals and institutions? How it carries over would be even more interesting to know.

4ntonius8lock(10000) 6 days ago [-]

I appreciate your excellent write up. When I read the allegations in the original article I kinda wanted more detail. You break it down very well with great attention to detail.

I'd like to place this here for those who only read comments:

.org registry rights belong to a non-profit. The rights were sold to a private equity group somewhere between 50% and 90% below market rate, based on self-dealing by the people given stewardship of the non-profit that manages .org.

Basically, this is privatization Russian style. Not good. Even if you like privatization, no-bid stuff is just wrong.

Want to help support the democratic institutions which hopefully won't fail us? Look here: https://drewdevault.com/2019/11/29/dotorg.html

mattrp(10000) 6 days ago [-]

I don't know if it's fair to say they could have easily doubled, but your point about not acting like investors is fair. In contrast, AAA, USAA, and AARP have chosen to leverage their assets to enter new lines of business, while ISOC simply sold theirs off. I am not familiar with what governance issues might be in play here, but a simpleton like me would ask: what alternatives did ISOC evaluate before taking a buyout offer from an insider-linked entity that ironically calls itself "ethos"...

cameldrv(4206) 6 days ago [-]

The bigger issue is that org was given to PIR to manage in the public interest. It was not supposed to even be a moneymaker for ISOC, they were just supposed to be the stewards of .org in the public interest. The fact that it's worth even $1 billion shows that they're operating it in the interest of the ISOC and not the public interest. ICANN should simply create a new entity that will charge break-even fees for registrations and stop trying to tax .org registrants with mandatory charitable donations to a dubious charity.

xwdv(10000) 6 days ago [-]

There ain't nothing "sure-fire" in this world when it comes to income, son. A bird in the hand is worth two in the bush. Those armchair investors from reddit don't know the situation.

forrestthewoods(3331) 6 days ago [-]

Exchanging 10 to 20 years of potential revenue for a flat check seems like a pretty good deal.

zapita(2803) 7 days ago [-]

The article says that a national bond fund may have fronted the endowment without turning .org into a for-profit. What does this mean?

lalaland1125(4092) 7 days ago [-]

They could have sold bonds (raised money via debt) as an alternative to selling .org. They probably would have gotten a good yield considering the stability of the .org revenue stream.

slantedview(3658) 6 days ago [-]

'Sullivan suggested that their goal is to return roughly the same annual revenue as they have been getting from PIR — around $50M. Of course this time it would come without the possibility of expanding the underlying business year by year.'

This hurts my head. Needless to say, the returns on this fund will be _far_ less assured than the returns on simply maintaining the .org business as it was (especially with Goldman managing the fund).

Impressively, even ISOC comes out a loser from this deal. Only Ethos wins, but then, that was surely the point.

ohashi(2463) 6 days ago [-]

Waiting to hear who at ISOC gets new gigs in the Ethos orbit.

edgefield0(4180) 6 days ago [-]

Is there a benefit to locking in pricing today for, say, 10 years? I assume the major registrars, such as Namecheap and GoDaddy, will allow a longer-term buy-in?

patrickdavey(3864) 6 days ago [-]

Another way to look at it: is there a downside to locking in the current price for 10 years? It doesn't _seem_, if Ethos does end up with it, that prices will go anywhere but up.

I renewed my .org for 10 years.

dropmann(10000) 6 days ago [-]

Wait, if you can no longer trust ICANN, doesn't that mean you cannot trust the whole internet (domain name system) anymore?

metasj(10000) 5 days ago [-]

I would assume that ICANN itself may give up its non-profit status in time. That doesn't make them untrustworthy in general -- they will only exist so long as the system they administer is reasonably useful -- just trustworthy for fewer things (don't expect them to implement any sanity check on prices, for instance).

dependenttypes(10000) 6 days ago [-]

This is a good chance to stop our dependence on DNS and move to things such as .onion domains instead (which by the way help avoid the whole certificate CA mess).

matheusmoreira(10000) 6 days ago [-]

Absolutely agree. Every site should also be available on the onion network. This would put some pressure on DNS operators, and it would also help dissociate the onion network from criminal activity. The more sites become available as hidden services, the more legitimate the network becomes.

zrm(3907) 6 days ago [-]

> This is a good chance to stop our dependence on DNS and move to things such as .onion domains instead (which by the way help avoid the whole certificate CA mess).

.onion isn't a direct competitor for DNS. It does some of the things DNS does (e.g. a consistent identifier that sticks even if your IP address changes), and even does some things DNS doesn't (e.g. encrypted transport), but the names aren't human-readable. And it compromises the security if you try to pretend that they are by generating pretty keys, because the random junk on the end there is important.

Namecoin, on the other hand, is a direct competitor for DNS.

However, DNS itself is pretty well federated. You're at the mercy of the TLD operator, but that only means you need to be careful to choose a trustworthy one. On the other hand, if you had asked last year which of the TLD operators would be the least likely to screw you over, a lot of people would have said .org, so... maybe there is something to this whole cryptographic trust thing.

kingludite(10000) 6 days ago [-]

I believe the problem is the lack of protocols implemented in browsers. It's needlessly fragile to have a single route to the data.

'Web pages only last about 100 days on average'[1]

http was nice, we gave it a good spin, it was certainly close enough for the cigar, but in all honesty... when you have 100 days to find the content you want to consume, it is not even a reasonable approach. I bet a lot of it is still available some place, but whoever archived it legally may not distribute it without permission.

I really don't care if TOR, IPFS, Freenet or ZeroNet work out of the box. If it means access to more content it's great. I don't even know how to use gopher atm.

Good stuff is happening[2], but the fact that they apparently chose to make it an extension is pretty lame.


If the user types 'salvation army' into the browser we know what they want. Selling the rights to deny access is not what we need.

[1] - http://blog.archive.org/2015/02/11/locking-the-web-open-a-ca...

[2] - https://blog.ipfs.io/2019-10-08-ipfs-browsers-update/

TomMckenny(10000) 6 days ago [-]

Honest question: since .org is relied on world wide, why are just two US state courts the only ones with the legal power to review the sale?

kick(10000) 6 days ago [-]

Why would anywhere else have legal power to? .org doesn't sell domains directly to consumers, and ICANN is intentionally centralized, up until a few years ago being owned by the US government. US-owned US-created top-level domains are hardly an international affair.

You could argue that DNS is broken, and that ICANN is bad, but there's no legal argument for .org being subject to foreign governments.

freddie_mercury(10000) 6 days ago [-]

You've got the causality backwards. It isn't 'the whole word uses this, so we decided to put two US state courts in charge of it'.

It was 'only two US state courts are in charge of it but the whole world decided to start using it anyway'.

No one forced anyone to use the US DNS system. They all knew what they were signing up for when they joined the public internet in the 80s and 90s, and haven't spent any time or money lobbying for a change.

agwa(2998) 7 days ago [-]

> Take a page from the Donuts book: create multiple price tiers for popular domains, up to 100x the base rate.

> Raise rates for long-time owners of common words. They weren't using that premium space anyway.

This is forbidden by the .org registry agreement, 2.10(c): https://www.icann.org/sites/default/files/tlds/org/org-agmt-...

metasj(10000) 6 days ago [-]

Ah, thanks -- you're right, the second is prohibited; corrected. The first seems fine per 2.10(c) as long as the registrant agrees on first registration that renewals will be expensive.

cannonedhamster(10000) 6 days ago [-]

And amazingly won't matter under the new owners who get to write their own rules. That's why everyone is upset. There was a vote that allowed .org to set whatever prices they wanted right before the sale.

jacquesm(43) 6 days ago [-]

Isn't there a technical resolution possible here where outsiders set up a new root for .org and people can change their allegiance and leave them to rot? That way Ethos capital (what a disingenuous name) paid $1B for nothing at all.

Accujack(10000) 5 days ago [-]

Yes, you just have to get everyone who might want to access a .org address or their DNS provider to recognize the new root.

That's the problem - you have to get organizations - some of whom are deeply invested in making money off of DNS - to agree to do so.

Or you just ignore the existing system and build a new one... once enough desirable sites are in the new one, then the old one will fade away.
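
As a sketch of what "changing allegiance" would mean operationally: a resolver operator can point the org zone at different servers, e.g. with an Unbound stub-zone. The address below is a documentation-range placeholder, not a real alternate root:

```
# unbound.conf fragment (illustrative only)
stub-zone:
    name: "org."
    stub-addr: 192.0.2.53   # hypothetical alternate .org server
```

The hard part, as the comment says, is getting every resolver operator in the world to make a change like this.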

leibnitz27(10000) 6 days ago [-]

I bought an org domain back in the 90s, and have been using it as my personal domain (i.e. also primary email address) ever since.

While, granted, I was perhaps a little silly to go org (it seemed like a good idea back then!), it's mildly terrifying that my personal footprint on the web of 20+ years can now be held to ransom by a random VC firm, and to keep my own email address I might have to pay an additional $$$ annually.


tqkxzugoaupvwqr(10000) 6 days ago [-]

Extend your domain lease by 10 years at the current yearly fee. Gives you some time to migrate if you choose to.

romaaeterna(10000) 7 days ago [-]

If this is really the case, someone(s) very possibly took and gave bribes, and we're going to see Federal scrutiny all over this.

jpdus(3133) 6 days ago [-]


I can't see ANY plausible explanation in this case (not even incompetence) except direct or indirect bribery.

Really sad to see more and more of these cases where non-profits sell out (e.g. OpenAI). I wonder whether this is a byproduct of people socialized in the age of hyper-capitalism and consumerism...

TheRealPomax(3909) 6 days ago [-]

No, we won't. Not unless you help set that in motion. Have you contacted the DA already?

Traster(10000) 6 days ago [-]

The US has done a fantastic job of legalizing its bribes. It's not bribery! It's lobbying! It's not corruption, it just happens to be that the guy in charge of the agency for co-ordinating economic policy is also the head of Goldman Sachs. It's not corruption, because we made it legal and so it can't be corruption, because we've redefined corruption. It's great!

This is one of the reasons why people around the world are a little sceptical of American exceptionalism.

philipn(2176) 7 days ago [-]

Are registries allowed to charge different prices for different domain names?

E.g. can they ask google.com for $1B to renew and mygrandmascookiecompany.com for $20 to renew?

agwa(2998) 7 days ago [-]

In the case of .org, non-uniform renewal pricing is only allowed if 'the applicable registrant expressly agreed in its registration agreement with registrar to higher Renewal Pricing at the time of the initial registration of the domain name following clear and conspicuous disclosure of such Renewal Pricing to such registrant'

So Ethos wouldn't be able to screw over existing registrants with non-uniform renewal pricing.

Source: Section 2.10(c) of https://www.icann.org/sites/default/files/tlds/org/org-agmt-...

joedavison(4189) 7 days ago [-]

This depends on the TLD (top level domain) in question.

For .com, .net, .org it has historically not been possible to price discriminate like this.

For the 'new TLDs' (the explosion of new extensions we have seen in recent years), the registry contract is different and they are indeed allowed to do this. They call it 'premium pricing'.

Part of the big outcry about the recent changes to .org is that it brings it closer to the 'new TLD' model, which disfavors the registrant.

mortenjorck(1098) 7 days ago [-]

I wonder what the level of awareness of this is in the nonprofit community itself. Something like the National Council of Nonprofits would seem to be in a good position to file a suit or at least raise awareness among its members who might be interested in forming a class.

While a major charity like the Salvation Army certainly doesn't care if a single, sub-$100 annual expense doubles or even goes up by a factor of ten, thousands of small organizations across the country might care enough to band together and take action.

C1sc0cat(10000) 6 days ago [-]

Former member of the worker coop that bid to run .org when ISOC won here.

Technically .org is not just for American-style "non-profits"; it was and should be for anything else that doesn't fit the other big five, e.g. jwz.org.

That was the problem: a lot of shady stuff goes on in the charity world ("but it's for charity"), which is notorious for bullying, often much worse than the behaviour of Wall Street or City bankers and traders.

I feel the common good would have been better served by a more sensible approach such as ours - coop members tend to be stroppy bastards and would have stood up for it - a lot of our ISP side in Manchester were members of alt 2600.

metasj(10000) 7 days ago [-]

Related past threads:

'Save .org': https://news.ycombinator.com/item?id=21611677

'Take action to save .org': https://news.ycombinator.com/item?id=21664582

'Why I Voted to Sell .org': https://news.ycombinator.com/item?id=21656960

'ISOC sold the .org registry to Ethos Capital for $1.1B' https://news.ycombinator.com/item?id=21667355

nathcd(4033) 6 days ago [-]

Dupe of 'Take action to save .org' with some discussion: https://news.ycombinator.com/item?id=21677533

lazyguy2(10000) 6 days ago [-]

The reason it sold at half its valuation is because the valuation was bullshit.

It's like some guy claiming his classic car is worth 25,000 dollars. If nobody is offering 25k for it then it's not worth 25k. Period, end of story.

goatinaboat(10000) 6 days ago [-]

the valuation was bullshit

Valuation of a cashflow is a very well understood thing. There is literally a button on my calculator that just does it. And a built-in function in Excel.
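
A minimal sketch of that calculation, with illustrative numbers rather than the deal's actual terms:

```python
# Present value of a level perpetual cashflow: PV = C / r.
# This is the "button on the calculator" the comment refers to.

def perpetuity_value(annual_cashflow: float, discount_rate: float) -> float:
    return annual_cashflow / discount_rate

# ~$55M/yr discounted at 5% is $1.1B; at 8% it is ~$690M.
# The chosen discount rate drives the whole valuation argument.
print(perpetuity_value(55e6, 0.05))
print(perpetuity_value(55e6, 0.08))
```

The formula is standard; the disagreement in this thread is entirely about which discount rate (and growth assumption) is appropriate for .org's revenue stream.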


lancewiggs(4008) 6 days ago [-]

They didn't appear to shop the offer.

It's like that 'some guy' selling his classic car to a classic car dealer who was walking by, without first testing the price with experts or posting an advertisement.

asdff(10000) 6 days ago [-]

It would be like you valuing the car at 25k then selling it to your buddy for 15k without even listing the car on ebay first to feel the price.

akiselev(4111) 6 days ago [-]

The entire deal was put together in secret within the last 2 months. We don't know what anyone else would have offered because no one else was given the chance.

tinus_hn(10000) 6 days ago [-]

The deal was closed in two days with no other offers allowed.

xivzgrev(3953) 6 days ago [-]

OK, honest question, fellow comment reader: what are you going to do about it? This is the 5th or 6th article I've seen on this transaction, with hundreds of comments each.

Ethos Capital does not give 2 shits about your comments here.

Are you writing to the DA like this article suggested? What else can you do?

Myself, I don't personally care that much. But I see a lot of people here obviously do, and I don't really see that energy translating into action. I would like to see it move forward in a positive direction, so I'm asking the question of you - you don't like it, what are you going to do about it besides complain here?

matthewdgreen(10000) 6 days ago [-]

The DA (and others) are a lot more likely to care about this if it gains press attention — preferably national and preferably well beyond the HN audience. Keep sending those emails to the DA, by all means. But if anything happens to this deal, it's going to be 100% driven by press attention.

tinus_hn(10000) 6 days ago [-]

I don't know if it is allowed but it sure would be amusing if ICANN decided to grant .org to someone else now, leaving Ethos with a worthless carcass.

C1sc0cat(10000) 6 days ago [-]

Anyone know anyone with a spine at ICANN? I am sure I could find people who might be interested - Ivan Pope for one.

enjoyyourlife(3681) 6 days ago [-]

The reason for this is that former Ethos members are involved with ICANN and are probably making money because of the sale.

quantified(10000) 6 days ago [-]

Follow the money.

If they invest with GS, see how much is left after 5-7 years.

apexalpha(10000) 6 days ago [-]

I still can't really believe someone can sell a building block of the world wide web like .org tld. Who runs .net? .com? .edu? .info? Can they be sold, too?

f4ewagy34aew(10000) 6 days ago [-]

Yes, all of those are private now. .com is still with ICANN (which is now an NGO), .net is with VeriSign, .edu with the Educause NGO, and .info with Afilias (a coop formed by several independent domain registrars).

Historical Discussions: Web-assembly powered WYSIWYG LaTeX Editor, supporting nearly all LaTeX package (December 05, 2019: 735 points)

(735) Web-assembly powered WYSIWYG LaTeX Editor, supporting nearly all LaTeX package

735 points 4 days ago by LegitGandalf in 10000th position

github.com | Estimated reading time – 5 minutes | comments | anchor


A Browser-based Fast LaTeX Visual Editor.

Key features:

  1. Fast compilation thanks to LaTeX checkpointing
  2. Cloud file storage

Try it here: https://www.swiftlatex.com/oauth/login_oauth?type=sandbox

Short Introduction

SwiftLaTeX is a Web-browser based editor to create PDF documents such as reports, term projects, slide decks, in the typesetting system LaTeX. In contrast to other web-based editors SwiftLaTeX is true WYSIWYG, What-you-see-is-what-you-get: You edit directly in a representation of the print output. You can import a LaTeX document at any stage of completeness into SwiftLaTeX. You can start a new document with SwiftLaTeX, or you can use SwiftLaTeX for final copy-editing. For advanced operation you can edit the so-called LaTeX source code, which gives you more fine-grained control. SwiftLaTeX is collaborative; you can share your project with others and work on it at the same time. SwiftLaTeX stores your data in the cloud under your account; currently it supports Google Drive and DropBox.


You are welcome to host SwiftLaTeX yourself under the AGPL license, or you can use our web version at https://www.swiftlatex.com.

Run SwiftLaTeX using Docker in 3 steps. (We will release the Docker image on Docker Hub soon.)

  1. Install Docker
  2. Run 'docker build . -t swiftlatex/swiftlatex'
  3. Run 'docker-compose up'

Run SwiftLaTeX using Python3 in 3 Steps

  1. Install Python3 && Pip3
  2. Run 'pip3 install -r requirements.txt'
  3. Run 'python3 wsgi.py'

Open the URL 'https://localhost:3000' and enjoy writing.

Adding Google Drive and Dropbox Support

  1. You first need to be a Google API Developer to retrieve a Google API Client ID and Secret. See here (https://developers.google.com/identity/protocols/OAuth2)
  2. Edit config.py and put your Client ID and Secret inside. (You can use environment variables instead.)
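
As a rough idea of what step 2 looks like when using environment variables — the variable names here are assumptions for illustration, so check the repo's actual config.py for the exact keys it reads:

```python
# Hypothetical config.py shape: read OAuth credentials from the
# environment, falling back to empty strings when unset.
import os

GOOGLE_CLIENT_ID = os.environ.get("GOOGLE_CLIENT_ID", "")
GOOGLE_CLIENT_SECRET = os.environ.get("GOOGLE_CLIENT_SECRET", "")

if not (GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET):
    print("Google Drive support disabled: missing OAuth credentials")
```

Keeping secrets in the environment rather than in the file avoids committing them to version control.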


About LaTeX packages

All packages are dynamically loaded from our file server and cached locally. Our file server has almost all the packages. If you want to host the file server by yourself, you can checkout another repo: https://github.com/elliott-wen/texlive-server

About Engines

Currently the engine is built atop pdfTeX, so no Unicode is supported. We are working to port XeTeX in a future release. The engine source code is hosted at https://github.com/SwiftLaTeX/PdfTeXLite. It is unusable so far, as we need more time to upload and tidy up the source code. Stay tuned.


Known Issues

  1. WYSIWYG formulas are absolutely positioned, so the correct display only appears after a compilation. Redundant spaces occur between words.
  2. Slow upload to Google. Our system abstracts your cloud storage as a POSIX-like file system to simplify the user interface implementation, at the cost of a little performance. We are working hard to reduce the network round-trip time.
  3. Sharing files only works on Google.
  4. Checkpointing breaks certain projects.

Pending Features

  1. Vertical Split View
  2. Adding XeTeX support. Clean up and release the engine source code.
  3. Tidy up all the JS files.
  4. Add GitHub and S3 storage support.


As an open source project, SwiftLaTeX strongly benefits from an active community. It is a good idea to announce your plans on the issue list, so everybody knows what's going on and there is no duplicate work.

Spread the word

The easiest way to help on the development with SwiftLaTeX is to use it! Furthermore, if you like SwiftLaTeX, tell all your friends and colleagues about it.

Bug Reports

User feedback is highly welcome. If you want to report a bug about a TeX document not compiling, please attach a snippet so that we can look into it.

Contribution and Copyright

If you send a pull request, you retain the copyright of your contribution, but you must agree to give us a license to use it in both the open source version and the version of SwiftLaTeX running at www.swiftlatex.com, which may include additional changes. For more details, see https://www.swiftlatex.com/contribute.html.

Happy New Year~

Thank you very much for your interest and support:) We are very happy to receive such a warm response. We are currently overwhelmed by our full-time jobs at uni. But we will try our best to monitor this repo and keep improving the code daily. Really appreciate your patience and wish you all a wonderful holiday season:)

Research Paper

If you are interested in the technical details, you could have a look at https://dl.acm.org/citation.cfm?id=3209522&dl=ACM&coll=DL (though some of the material in the paper is outdated).

All Comments: [-] | anchor

radarsat1(4171) 4 days ago [-]

Out of curiosity, and in the context of Latex, does anyone know of a good self-hosted collaborative web-based editor with preview? Or even something decentralized that works over WebRTC?

ketzu(10000) 4 days ago [-]

If you want one using LaTeX, you can use the open-source, self-hosted Overleaf variant. The worst part is the user management though, as they keep the more sensible user management for the paid version.

superfist(10000) 4 days ago [-]

Personally I hate TeX/LaTeX and I think it should have been replaced by something else a long time ago. First of all the syntax is horrible: if you don't work with it on a daily basis, try to figure out what a macro you wrote one year ago is doing. Each time you have to jump to the manual and learn almost everything from the very beginning. The next thing is the lack of UTF-8 and TrueType font support (I know there are XeTeX and LuaTeX), but today such features should be in the very core of a text system, not in some software branch. Extensibility is the next thing, and here again it is very poor unless you use something modern like LuaTeX or you are an expert in TeX macros. Package dependency hell is next, side by side with a stupid compilation process with meaningless error messages. Am I the only one who thinks this way?

alanbernstein(4199) 4 days ago [-]

I feel like there isn't even a straightforward way to learn the principles of the syntax of LaTeX, in full. I've been an occasional user for years, I love the results, and I can do basic math typesetting off the top of my head. Anything more involved (diagrams especially), I really do need to start from scratch with some new package to get things working. Fortunately, stackoverflow tends to be a one-stop solution for me most of the time.

I don't believe I'll ever have a deep enough grasp to be able to do those more complex tasks on my own, and that really bothers me. I can imagine some syntax improvements, but like a sibling comment said, I don't see the mindshare shift happening any time soon.

porker(1075) 4 days ago [-]

> Am I the only one who think this way?

No. But redoing it all from scratch and getting mindshare is difficult.

Substance.io is one project I've followed for years who aim to do this, and the time, false starts etc show just how tricky it is. https://twitter.com/_mql/status/1202200085288935430

One could argue that Adobe InDesign, Quark XPress et al are the replacement. They have the typesetting and layout capabilities that common software lacks.

jimhefferon(4219) 4 days ago [-]

> Next thing is lack of utf-8 and TrueType fonts supports (I know there is XeTeX and LuaTeX)

The two halves of that sentence seem to be in conflict. Could you say more?

lisper(123) 4 days ago [-]

Why don't you just design your own syntax and write a little compiler for it that uses TeX as its back end?
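
In that spirit, here is a toy front end: it compiles a made-up lightweight syntax (invented for this example, not an existing format) into LaTeX source, leaving TeX to do the typesetting:

```python
# Toy "own syntax" -> LaTeX compiler: headings, bullets, and *bold*.
import re

def compile_to_latex(src: str) -> str:
    out = []
    for line in src.splitlines():
        if line.startswith("# "):          # "# Title" -> \section{Title}
            out.append(r"\section{%s}" % line[2:])
        elif line.startswith("* "):        # "* item" -> \item item
            out.append(r"\item %s" % line[2:])
        else:                              # *bold* -> \textbf{bold}
            out.append(re.sub(r"\*(.+?)\*", r"\\textbf{\1}", line))
    return "\n".join(out)

print(compile_to_latex("# Intro\nThis is *important*."))
```

A real front end would also emit a document preamble and wrap runs of items in an itemize environment; the point is only that the TeX back end stays untouched.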

brodo(3830) 4 days ago [-]

Check out SILE: https://sile-typesetter.org/ If you don't deal with math, it's pretty feature complete.

tambre(4104) 4 days ago [-]

Have you tried LaTeX3? I find it to be a huge productivity booster compared to the old TeX ways that I never managed to grok.

cbolton(10000) 3 days ago [-]

> Next thing is lack of utf-8 and TrueType fonts supports (I know there is XeTeX and LuaTeX)

UTF-8 is supported by all modern engines. It's just that it wasn't the default for pdfTeX, so you had to add one line to enable it. But this changed in 2018 [1], so UTF-8 is now the default LaTeX encoding even when using good old pdfTeX.

[1] https://www.texdev.net/2018/03/25/latex2e-utf-8-as-standard/

andrepd(3726) 4 days ago [-]

Your comment reminds me of the people who shit on C++. Some of the criticism may be true, but there simply isn't anything out there that could possibly be considered a feasible replacement.


>lack of utf-8 and TrueType fonts supports

There's UTF-8 support in the standard pdfLaTeX compiler and TrueType support in XeTeX (which is far far from a trivial issue, which is why there are two branches, if you don't need TT fonts you're better off with pdfLaTeX which has more support for e.g. microtypographical adjustments).

todd8(1400) 4 days ago [-]

The problem with systems like Adobe Indesign is that they are "What You See Is All You Get". For a company brochure with special typography and particular Pantone ink colors used in offset printing there may be nothing better. However, I've owned a license to Indesign for over a decade. I've done tutorials, bought half a dozen books on it and maybe produced one document using it.

In Indesign I wasted so much time figuring out how to get the layout, the formatting, the figures, code samples, mathematics, bibliography, even page numbering, index and footers I wanted that I had to give up.

The power of bibtex, TikZ, and other packages in the TeX ecosystem make it possible to use a system that produces just the sort of documents that I want—and the software is infinitely less expensive (it doesn't cost anything).

TeX isn't without its difficulties. I've been programming for over 50 years and still dread diving into the macro language based code for complex packages—but at least it's available, something you can't say about these other proprietary systems.

TeX was written by a Computer Scientist (perhaps the most eminent Computer Scientist) and it shows. Its real power is revealed only to those able to program. This is a shame, because using it has made me appreciate good typesetting—something I've found difficult to achieve in the kind of papers I write when I use other tools.

I use MS Word when my recipient needs it that way, I use Apple Pages or Google Docs for simple documents when I don't care what they look like, and Markdown or Org mode for my own notes. For anything important or for something I want to have as archival source (TeX is a purposely frozen format) I use TeX and its related tools, LaTeX etc.

Give it some time, it may grow on you.

fivre(4061) 4 days ago [-]

While I don't doubt the utility of this at all, I am quite amused by the concept of WYSIWYG LaTeX.

stabbles(4158) 4 days ago [-]

Some people I know literally share LaTeX math in emails and chats and assume my head has an internal LaTeX-compiler to understand it (usually I do). I guess this means people are really accustomed to writing maths in one go in LaTeX -- it's like the pseudo-code of maths.
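
For readers outside that habit, this is the kind of raw fragment that gets pasted into a chat and read "by eye", with no compiler involved:

```latex
% Shared as plain text in an email or chat message:
\sum_{k=0}^{n} \binom{n}{k} x^k = (1+x)^n
```

Anyone who writes enough LaTeX ends up parsing this mentally into the rendered binomial identity.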

pjmlp(298) 4 days ago [-]

Why? It is almost as old as LaTeX.

Not everyone enjoys programming their documents, and visual editors for LaTeX written in Motif were among the first to become available.

philistine(10000) 4 days ago [-]

This seems like an incredible tool to increase adoption of TeX systems. It solves the problem of: I want to try my hand at TeX but you're saying I have to install what now?

_emacsomancer_(3169) 4 days ago [-]

Overleaf.com is another place to try out TeX without installing anything. (The free version is perfectly fine for almost anything other than collaborating with 3+ people on the same project.) The web interface isn't as nice as a proper text editor, but it's not too bad.

GnarfGnarf(10000) 4 days ago [-]

I sat next to Prof. Knuth in 1982 at Stanford, while he did a demo of TeX on the university's DEC-10. I asked him what was next. He said: real-time, WYSIWYG display.

At the time, the idea struck me as utterly impossible.

todd8(1400) 4 days ago [-]

Back in the 80's I worked as an OS software architect. I used to do a mental exercise occasionally. I would imagine how I would design things if processors were infinitely fast. Back then we were still following Moore's law, even for single processor machines. Processor speed, memory size, disk speed, and network speed all put hurdles up for what we could do.

I remember talking to a colleague about the possibility of a TeX system that would rerender a page as it was typed in one screen and viewed in another.

xyproto(10000) 4 days ago [-]

The future is now.

sabujp(10000) 4 days ago [-]

I'm getting a giant pink screen, this is broken.

Weebs(10000) 4 days ago [-]

Getting the same issue on the Master Thesis template in Firefox 71.0

1980phipsi(10000) 4 days ago [-]

Google Drive and Dropbox are blocked at work for me... It would be nice if they had an option for when you don't want to save your work, like just an in-browser editor without saving. Sometimes I just want to write up some LaTeX and then copy it into something else. Nowadays I usually need to open up LyX to do the same thing.

1980phipsi(10000) 4 days ago [-]

Ah, so the sandbox mode on github is what works for me. However, there is no link for it on the main page.

Regardless, I suppose I was mistaken about what the project was. I assumed it was like a WYSIWYG version of LyX in the browser. You're still writing LaTeX with this.

krackers(10000) 4 days ago [-]

The rendering happens in real-time when you type! Is this using pdflatex? Because I've never seen an editor with this low of a response time.

The only slight nit is that the blue progress/page-load bar that appears across the top of the screen while you're typing is annoying. And the baseline kerning of the fonts in math mode seems a bit off: $$x^2 + 2x + 1$$ has the x in 2x a bit raised.

ulrikrasmussen(3971) 4 days ago [-]

It appears to be using a heuristic to update the PDF directly when you type, and then it periodically runs LaTeX to do global layout. Try putting the cursor before \LaTeX in the example document and input a few spaces. For me, it shifted the first characters into the later ones, perhaps because their heuristic couldn't detect that they are on the same line due to the vertical offset.

svat(4094) 4 days ago [-]

This is amazing.

• This seems to be their main page: https://www.swiftlatex.com/

• Not all the source code is on GitHub; crucially, their modification of the TeX engine seems to be distributed only as two `.wasm` binary files. Not sure if they plan to share more or not.

• As mentioned in the FAQ/docs page, this is the work of just two people from New Zealand (Gerald Weber and Elliott Wen), and they have a paper about it from 2018 ("SwiftLaTeX: Exploring Web-based True WYSIWYG Editing for Digital Publishing", DOI: 10.1145/3209280.3209522). Based on a quick skim so far, the paper looks fantastic, looking forward to reading it in more detail.

• In the paper, Figure 5 and the surrounding text describe how TeX was modified (the part of most interest to me); it's really clever! To avoid modifying the data structures and introducing new bugs, they hook only into TeX's internal allocation functions for tokens. (TeX as originally written by Knuth does not use malloc() or equivalent; it does all its own allocation out of giant arrays called 'mem' and 'str'.) They can then look up this bookkeeping when the token lists are being shipped out to PDF format.
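The hooking idea described in that bullet can be loosely sketched in Python. This is purely illustrative (the real modification lives inside the engine's sources, and none of these names come from SwiftLaTeX): an arena allocator in the style of TeX's 'mem' array, whose allocation routine is hooked to record, for each token, the input position it was created at.

```python
# Illustrative sketch only: a TeX-style fixed arena (no malloc()) whose
# single allocation routine is hooked to keep extra bookkeeping, so that
# tokens can later be mapped back to their source position at shipout.

class TokenArena:
    def __init__(self, size=1000):
        self.mem = [None] * size   # fixed array, like TeX's 'mem'
        self.next_free = 0
        self.origin = {}           # bookkeeping added by the "hook"
        self.current_line = 0      # updated by the input routine

    def get_avail(self):
        """Allocate one token cell; the hook records where it came from."""
        p = self.next_free
        self.next_free += 1
        self.origin[p] = self.current_line   # the extra bookkeeping
        return p

    def store_token(self, value):
        p = self.get_avail()
        self.mem[p] = value
        return p

arena = TokenArena()
arena.current_line = 12     # input routine: now reading source line 12
t1 = arena.store_token("x")
arena.current_line = 13
t2 = arena.store_token("\\alpha")

# At shipout time, each token maps back to a source line, which is what
# enables click-to-jump between the PDF and the source.
print(arena.origin[t1], arena.origin[t2])   # 12 13
```

The point of hooking only the allocator, as the comment explains, is that the existing data structures stay untouched, so the risk of introducing new bugs is small.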

• Looks like it has some limitations as far as PS/PDF specials go (aka "drivers" in the TeX world), so TikZ or tcolorbox don't work too well, for example. However, my guess is that this is just an issue with their PDF rendering (per the paper they use something like Pdf2htmlEX rather than pdf.js, for speed), and not a fundamental issue.

• But otherwise most of the standard LaTeX features and packages seem to work (labels and cross-references, etc.); you can \usepackage anything and it will download the corresponding files, but no data leaves your system; everything happens in the browser. Heck, I even pasted in xii.tex (without the final 'jbye') and it works (you can click on "partridge" in the PDF and go to the corresponding part of the source).

• This sort of WYSIWYG editing for LaTeX has been done in a couple of proprietary systems before (BaKoMa TeX / Texpad), and some ancient systems as well (VorTeX), but they've been buggy in my limited experience. There was also a very impressive demo at this year's TUG meeting, by David Fuchs (who Knuth described as his "right-hand man" on the TeX project). All these projects have had to grapple with the same issues (achieving quiescence etc). This one seems to have its share of minor bugs (some artefacts seem to be visible in their published paper too!), so e.g. a feature to fully update the PDF after a (very) long typing pause (or manual user request) seems desirable. Nevertheless it's very impressive as it is.

• I think some sort of engagement with the TeX community (the mailing lists at http://tug.org/texlive/lists.html etc) may help: it appears their code is currently based on pdfTeX; they should probably consider XeTeX / LuaTeX as well (given that the doc page at https://swiftlatex.readthedocs.io/en/latest/ mentions "Lack of Unicode Support"). There are experts there with some idea of corner cases, the weird things that users want, etc. Hope this becomes part of the TeX mainstream (what little there is of it) to benefit all users (good typesetting for everyone!) and not some sort of edge case that dies when/if the authors lose interest.

Overall, am really awestruck by all this. Congratulations and good luck to the authors!

bouvin(4184) 4 days ago [-]

Their paper even won the Best Student Paper award at DocEng 2018.

askef(10000) 4 days ago [-]

Modified source is here: https://github.com/SwiftLaTeX/PdfTeXLite

jfk13(2852) 4 days ago [-]

> • Not all the source code is on GitHub; crucially, their modification of the TeX engine seems to be distributed only as two `.wasm` binary files. Not sure if they plan to share more or not.

PdfTeX is GPL-licensed, so if this is derived from pdfTeX (is it?), I assume they'd be required to make their source available. (Note that the GPL says that 'the source code for a work means the preferred form of the work for making modifications to it', which I don't think would be a .wasm binary file.)

irrational(10000) 4 days ago [-]

>crucially their modification of the TeX engine seems to be distributed only as the two `.wasm` binary files

I thought one of the selling points of WebAssembly is that wasm files have a textual format, so anyone can read the source.

amichail(425) 4 days ago [-]

How is this better than TeXmacs?

ChuckNorris89(4216) 4 days ago [-]

This is indeed amazing.

I'm interested in whether it can be self-hosted for use in a commercial environment. This would be a killer feature for increasing LaTeX adoption at corporations stuck with MS Word.

kfl(4189) 4 days ago [-]

It seems that the source for the modified pdfTeX can be found here: https://github.com/SwiftLaTeX/PdfTeXLite

porker(1075) 4 days ago [-]

LyX [0] eat your heart out.

It's good to see competition in this space as LyX's development has slowed the last few years. I still like it, but will be interested to try this alternative.

0. https://wiki.lyx.org

upofadown(4201) 4 days ago [-]

LyX is WYSIWYM (What You See Is What You Mean), so an entirely different sort of thing. I guess you could argue that there is really no point in doing a WYSIWYG editor on top of LaTeX: if it entirely works, then the LaTeX is just an internal layer that adds nothing but pointless complexity. WYSIWYM is the primary reason people bother with LaTeX in the first place. It's for people who want the computer to do the work of laying out the text.

codeduck(10000) 4 days ago [-]

I use LyX as my primary tool for writing. I know it's clunky, but it's as reliable as the tides.

savolai(10000) 4 days ago [-]

I'm getting this on iOS Safari when creating a document:

'Oops Error Detected! Looks like there was a problem when creating the project: DataCloneError: Failed to store record in an IDBObjectStore: BlobURLs are not yet supported.'

zekrioca(4214) 4 days ago [-]

It went very smoothly on Firefox. Maybe this is a thing with iOS/Safari?

ColinEberhardt(3560) 4 days ago [-]

This tool is very cool - however it appears to be written in JavaScript. I can't see any evidence of the use of WebAssembly.

conorliv1(10000) 4 days ago [-]

The typing update speed blows Overleaf out of the water. I've used Overleaf for the past 3 years. It's a great product, but one of my biggest complaints is slow LaTeX rendering. If this product were a bit more polished I would use it instead of Overleaf.

LolWolf(3621) 4 days ago [-]

You should check out TexPad![0] It's lovely and essentially real-time, with autocomplete, etc. I use it as my editor of choice for all of my papers (not that any of them have particularly challenging layouts).

Really cannot recommend it enough. :)


[0] https://www.texpad.com/

fg6hr(10000) 4 days ago [-]

A web grandmaster, I see. Office365/Google Docs should be very interested in this stuff, and by 'very' I mean $50 million at least (they burn way more money on complete BS projects). It's very likely that the author knows more than I do, but it seems reasonable to work out some sort of dual-licensing deal: one for corps who want to take it and develop it further, and one for smaller businesses who are OK with a SaaS-ish solution.

throwGuardian(10000) 4 days ago [-]

I'm a big fan of LaTeX, having used it for my thesis, presentations (beamer), and a few peer-reviewed articles/journals.

With that disclaimer, I can safely contend that Google/MSFT will not be interested in this. LaTeX is for academic typesetting, with a special focus on math-heavy content. Even with WYSIWYG, every now and then one will need to get into the weeds of TeX syntax, which is simply not for the average computer user.

As for academia, they will not pay Google/Microsoft for LaTeX use when they might as well use LyX/Kile or their favorite editor with some syntax-highlighting support for TeX. And unlike regular note taking, academic writing isn't so spontaneous that you start editing your IEEE manuscript on the pot, while on your phone.

My guess is that the FAANG tribe has little incentive to commercialize this, and hence to acquire it.

cm2187(3384) 4 days ago [-]

I agree; Word and PowerPoint would gain a lot from having a side-by-side markup/WYSIWYG experience. My definition of good is the Visual Studio WPF editor, where you can live entirely in markup or in the UI, or do both at the same time.

But the reality is that outside of making Office run on other platforms, there has been near-zero innovation from Microsoft.

thekingofh(10000) 4 days ago [-]

Can someone enlighten me on the benefits of LaTeX? I've never run across it or someone who uses it.

chabons(10000) 4 days ago [-]

Some people use it for everything, but I think it really shines in writing documents with heavy mathematical content due to the straightforward math syntax.

There was a paper posted here a couple months ago that claimed that even expert LaTeX users wrote documents slower than novice Word users, but the interesting caveat is that LaTeX users were far faster across the board when the text included a lot of math.

ska(10000) 4 days ago [-]

It's a markup language with a long history I won't get into. But that means it at least aims at separating content and presentation. Relative to producing a document in something like Word, it has a few real strengths. It's very good at typesetting mathematical notation and at typesetting different languages properly in the same document, and pretty good at managing large, complicated documents. It tends to do hyphenation, line breaks, and spacing better than word processors.

You can easily solve versioning and collaboration issues because the input is plain text (like source code, just version control it and use patches, PRs, etc.) This works better than 'track changes' in practice, especially with multiple authors.

So those are the upsides. Downsides: it's a bit esoteric, and clunky for lots of documentation tasks. The implementation is complex and the package/module system can step all over itself. If you have output that is nearly but not quite right, it can be a real bear to fix it.

These days Markdown may be a better choice for a lot of simple documentation tasks where you don't care much about the final output presentation details. Any word processor can probably let you do some things more quickly.

When I was a grad student I knew multiple people in STEM fields who tried to do their thesis in Word (or equivalent) and gave up in frustration, moving it all to LaTeX. I never knew anyone who successfully went the other direction. I don't know if that is still the case.

If you use it a lot, you may find yourself wanting to use it for everything (letters, resumes, presentations, etc.) but many of those things aren't particularly strengths. If you don't use it a lot you will find it hard to come back to casually.

One other strength I should mention, especially for automated documentation. Knuth was very particular about stability in TeX, and LaTeX has mostly-kinda-sorta followed this philosophy. So unless you have used a lot of marginal packages or something, it's entirely reasonable to expect that processing a 20 year old input will work fine using current builds. This is not something you can say of most systems.

Slartie(4126) 4 days ago [-]

One of the key aspects about LaTeX that made me very productive with it was that I could just dump lots and lots of text pretty much directly from my brain into the plaintext editor, without thinking much about the intricate details of the layout. I suspect that an important 'feature' that enabled this was the fact that I didn't have the exact layout present and visible in front of me all the time, but just the semantic aspects of it - until I manually triggered a compilation to actually get everything laid out.

However, this is still an awesome editor, and I would probably have loved to use the near-instant WYSIWYG updates for more complex and layout-sensitive parts of papers, like tables and such. I'd just wish this editor would allow to disable the distracting layout rendering completely for 'dump-text-from-brain' phases.

mFixman(10000) 4 days ago [-]

I agree. Brain-dumping `\begin{figure}[t]` while writing seems more natural than thinking 'go to this menu, click this option, see if it inserted the figure, and choose "top" in its location-properties dialog'.

PeterStuer(10000) 4 days ago [-]

For me it was the reverse. Layout and content always went hand in hand, each engaging and influencing the other.

I have noticed these two approaches (text as 'data' irrespective of presentation, and text as part of a holistic piece of work) to be strongly preferred by different people, and either type of person struggles to be productive when mismatched.

Historical Discussions: Show HN: A searchable list of self-hosted software with screenshots (December 05, 2019: 638 points)

(645) Show HN: A searchable list of self-hosted software with screenshots

645 points 4 days ago by techindex in 10000th position

selfhostedsource.tech | | comments | anchor

Web-based, open source application for standards-based archival description and access in a multilingual, multi-repository environment.

stable - Updated 5 days ago

The most recent release v1.3.2 was published 4 years ago and has a status of stable

The latest commit was 5 days ago

Read more

All Comments: [-] | anchor

edf13(3746) 4 days ago [-]

Nice list - not sure the screenshots add a great deal though.

woodrowbarlow(10000) 4 days ago [-]

agreed. they're just screenshots of each project's homepage. not a single screenshot of the actual software.

leemailll(3938) 4 days ago [-]

the screenshots are a nice touch, but I still prefer https://github.com/awesome-selfhosted/awesome-selfhosted

techindex(10000) 4 days ago [-]

It's the same list, just searchable, with some extra meta information scraped from the github repos and the project websites

woodrowbarlow(10000) 4 days ago [-]

the screenshots are all just a screenshot of whatever the project's listed homepage is. not a single screenshot of the actual software. these screenshots are pretty useless.

bullen(10000) 4 days ago [-]

The biggest problem for me (as someone who makes latency-sensitive multiplayer games) is that you can't get self-hosted global presence.

Otherwise my stack is completely self-made and can run on a Raspberry Pi: http://github.com/tinspin

jlg23(3943) 4 days ago [-]

> self hosted global presence.

Could you elaborate what that is supposed to mean?

badrequest(4054) 4 days ago [-]

Why is something like this included? https://selfhostedsource.tech/p/algorithms

Am I meant to put this software on a computer and run it 24/7? Why?

techindex(10000) 4 days ago [-]

You've stumbled on the python list: https://lucidindex.com/python

It's not 'technically' part of selfhostedsource, but there's one database shared between selfhostedsource and lucidindex so it's pretty easy to get results from other lists

Lucid Index will eventually index all the 'awesome' lists, but there are a few others already functioning here: https://lucidindex.com

recrudesce(10000) 4 days ago [-]

Website constantly times out for me, stuff doesn't load etc. Not sure if that's because of traffic due to this post or not.

techindex(10000) 4 days ago [-]

Definitely the traffic. Wasn't expecting the front page. Working on it now.

vesinisa(3157) 4 days ago [-]

For me, on all search results, only the first page loads. Second page is an empty document:


(Not sure if a bug or due to load.)

max23_(4124) 4 days ago [-]

Probably HN hug of death.

anderspitman(1760) 4 days ago [-]

This looks really nice.

In principle, I'm a big advocate of self-hosting (one of my services is even on this list). In practice, it just doesn't work for me. Once I get beyond 2-3 services it's just too much hassle to keep track of everything.

The key realization for me is that I don't actually care too much where the software is running, or who is running it for me. What I do care about is avoiding vendor lock-in. As long as I have a reasonable escape hatch if my service company starts doing things I don't like, that's good enough; this keeps them honest. My issue with the current crop of monoliths like Google services is that there's no obvious migration path if you get fed up with them, so you're pretty much stuck with them no matter how crappy their software or customer service is or gets.

That's why I think something like sandstorm.io or cloudron is the future of self-hosting, at least in the near future. Maybe eventually we'll have a substrate of simple protocols and practices that will make it reasonable to manage everything yourself, but we're not there yet.

bshipp(10000) 4 days ago [-]

I found Docker was perfect for self-hosting services with relative ease. Upgrades are self-contained, security is simplified behind a reverse proxy like nginx, and there are very few dependency conflicts to worry about. I've got about 40 containers running at any given time and barely think about them at all.

heavyset_go(10000) 3 days ago [-]

> Once I get beyond 2-3 services it's just too much hassle to keep track of everything.

Docker Swarm shines in this regard because it's easy to set up, and its simplicity allows you to just forget about it. I run 30+ services behind a reverse proxy for personal use and don't need to keep track of anything.
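As a minimal sketch of that pattern (the service names, images, and nginx.conf file here are illustrative assumptions, not taken from the comments), a Compose file fronting one app with a reverse proxy might look like:

```yaml
# Hypothetical docker-compose.yml: one reverse proxy fronting one app.
version: "3.8"
services:
  proxy:
    image: nginx:alpine
    ports:
      - "443:443"                  # only the proxy is exposed to the world
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
  wiki:                            # example self-hosted service
    image: requarks/wiki:2         # any app image works the same way
    expose:
      - "3000"                     # reachable by the proxy, not the host
```

Adding another service is just another block plus a proxy rule, which is what keeps dozens of containers manageable.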

m_fayer(4119) 4 days ago [-]

I know many non-technical small-scale entrepreneurs who would love to use OSS self-hosted tools to run the basics of their businesses and get away from Big Tech. I'm talking little shops, yoga studios, restaurants, recording studios, etc. All they need are the basics: email, calendars, maybe some shift management, inventory management, and obviously documents. These people are completely non-technical, and I don't know of any tooling that would let them set these things up quickly and reliably.

I fantasize about running a little consultancy that would set up and maintain a tailored package of self-hosted OSS software for such small businesses. But I haven't actually studied whether there's a workable business model to be had, or whether there's enough quality self-hosted software out there to adequately cover the needs of most small businesses. I'm curious if HN thinks this could be a viable business...

scrozier(10000) 3 days ago [-]

Having started my career in this space, many years ago, I've grown to think there is not a good business model to be had. It's too bad, because there's a huge demand. These small companies wind up with bad solutions all around.

fonosip(4186) 4 days ago [-]

We are giving this a shot at https://ba.net/private-cloud-office

gpdpocketer(10000) 3 days ago [-]

Totally easy to step off the net. You just put bits of it in your own space.

I wanna write a book titled 'How to run your small/medium-sized office with just a Linux box on the wall and your staff's cell phones...'

Pretty sure I read a FAQ for how to do this, 20 years ago, though...

_zamorano_(10000) 4 days ago [-]

To me, most small businesses would be fine with cheap HostGator/GoDaddy web hosting with cPanel.

Heck, I personally 'maintain' (not that I do much) all my company employees' email addresses, different domains, a couple of simple webpages, calendars, an issue tracker...

All without doing much, apart from setting up the anti-spam and installing the Installatron applications I need most.

halfeatenpie(10000) 4 days ago [-]

There is definitely a market, but I'd say it's a difficult nut to crack. You first want to do a quick cost-benefit analysis to make sure you're making an informed decision.

We won't factor in privacy, working with a big company, or other concerns here; we will only focus on the numbers.

Let's look at the email/productivity market real quick.

Google's base package is $6 a month per user with all the backups, support, and infrastructure in place (realistically, you'll want the $12-a-month package).

Since email is critical infrastructure (I believe it's one of the most critical elements of a business), let's say two DigitalOcean VPSes at $6 a month each (one primary, one failover) with a license of HostinaBox/Vesta/whatever open source solution you use, with DigitalOcean's backups enabled. Not the 'best' design, but something I'd say is OK and within scope for a small business. That's $12 a month in base costs plus your consulting time, for a sizeable capacity.

For a small business of 1 person, they get a better deal by just going with Google. Google's applications and software are easily integrated with its other services, it comes with a productivity suite (GSuite: Google Docs, Drive, etc.), and a small business that is probably risk-averse in these decisions will feel more comfortable working with Google.

For a small business of 5 people, I'd say it's still more worth it for them to use Google, as that's $30 a month (most consultants charge more than that an hour). If they hire a consultant and the poo hits the fan, then they'll be paying the consultant money to execute the disaster recovery plan. Even if you did take them on as a client, that's a maximum of $18 a month you get to keep (assuming no issues/errors happen).

For a small business of 50 people, it now gets into interesting territory. However, for 50 people I'd change the base server/system configuration to have higher capacity and be more fault-tolerant and resilient under disaster scenarios (which would increase base operating costs).

I'd say this really depends on marginal benefits and on the relationships established with your clients. In the end, you can probably make some money, but your time and effort might be better spent on more productive and lucrative tasks. This also assumes that the self-hosted OSS software is of a quality the clients will be happy with. I'd argue Google's mail offering may have annoying/restrictive spam policies and be frustrating at times, but it is a high-quality product at an affordable price point. The variation in quality of OSS products concerns me, as do developers, probably overworked and underpaid for their contributions, being asked to make changes to support clients they're not directly paid by.
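Making the arithmetic explicit with the comment's own numbers ($6/user/month for Google's base tier, two $6 VPSes for self-hosting; `max_margin` is my name for it, not the commenter's):

```python
# Rough margin model from the comment's numbers (all USD per month).
google_per_user = 6       # Google's base package, per user
self_host_base = 2 * 6    # two DigitalOcean VPSes (primary + failover)

def max_margin(users):
    """What you could keep if you matched Google's price exactly."""
    return users * google_per_user - self_host_base

print(max_margin(1))   # -6: a one-person shop is cheaper on Google
print(max_margin(5))   # 18: the 'maximum of 18 dollars a month' above
```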

As a risk-averse business, I'd rather rest my eggs in the Google/Microsoft/whoever basket and work directly with the entity that maintains the codebase (or has the talent/expertise on hand to make adjustments), rather than an intermediary of equal level that, in the end, is subject to the decisions and leadership of the OSS project.

Now take this another step and say you build a consulting company that handles all of these as a one-stop shop? Well... then I don't see any service/model here that a local IT consulting company can't already offer.

So to really make this work, I'd say a shop that automates these deployments on demand and offers a large selection of applications is probably the best step forward. Even then, I don't really see the viability of this from a funding perspective, except for scaling. In the end, from my perspective, the opportunity is there, but it'd be hard to do it right. Also, funding the developers of the software you're making money off of would be great, but that's a whole other thing (and I could squabble about that for hours).

Quick plug: two very good friends of mine are in the process of tackling a similar issue via their venture[0]. I am a customer of theirs, but have been friends with them since before this venture. I really recommend their product as affordable and reliable; it 'just works'.

[0]: https://mxroute.com/

jotm(4152) 4 days ago [-]

Sounds like a good idea to me. Most people would go with the big company in my experience, but your niche is huge anyway. WPEngine seems to be doing alright still (I remember thinking no way would anyone pay for just WordPress hosting... I was wrong), as are the many small website/app/hosting providers/consultancies.

ocdtrekkie(2741) 4 days ago [-]

A few of us are trying to get development moving again on Sandstorm.io. It's a super handy (and extremely secure by default) piece of software; it just needs a little nudge of developer interest. Its original target was the enterprise space, but it's an awesome self-hosting platform for privacy nerds too.

The OSS is out there, but you've gotta wrap it in enough design work to make the experience comparable to closed source offerings: Most businesses aren't going to want to pay for worse software.

ridruejo(1733) 4 days ago [-]

https://bitnami.com/stacks helps quite a bit (disclaimer, I am the cofounder)

jannes(3857) 4 days ago [-]

Synology NAS devices offer a few of the features you mention: email [0], calendar [1], document sharing [2], chat [3], office apps [4].

Shift/inventory management would need to be added via third-party software (or implemented in spreadsheets).

I've heard of companies setting these up for small businesses. [5]

[0] https://www.synology.com/en-global/dsm/feature/mailplus

[1] https://www.synology.com/en-global/dsm/feature/calendar

[2] https://www.synology.com/en-global/dsm/feature/drive

[3] https://www.synology.com/en-global/dsm/feature/chat

[4] https://www.synology.com/en-global/dsm/feature/office

[5] https://www.synology.com/en-global/wheretobuy/United%20Kingd...

codingdave(10000) 4 days ago [-]

I did similar work between gigs a couple of years back. There is absolutely a customer base for such things. But they don't have tech budgets; even pricing our services fairly cheaply, they would balk at the cost.

So there may be a business opportunity, but you need enough scale that each single customer is happy with the price.

gramakri(4205) 4 days ago [-]

Please give Cloudron (https://cloudron.io) a try. We provide a solution that makes it easy to self-host apps. We provide Ghost, Rainloop, Nextcloud, InvoiceNinja, GitLab, Rocket.Chat among other apps. Full list here - https://cloudron.io/store/index.html

Disclaimer: I am the co-founder

yeswecatan(4023) 4 days ago [-]

Great list. I started using Firefly to track my finances back in October. Loving it so far.

MrZander(10000) 4 days ago [-]

How well does it integrate with other financial services? I currently just use Mint because it's the only app that can consistently pull in all my accounts into one place.

oefrha(4042) 4 days ago [-]

From https://selfhostedsource.tech/about:

> Lucid Index sources its data from curated lists of software compiled by volunteers. The specific lists used are:

> - Awesome Self Hosted[1]

> - Awesome SysAdmin[2]

[1] https://github.com/Kickball/awesome-selfhosted

[2] https://github.com/n1trux/awesome-sysadmin

koheripbal(10000) 3 days ago [-]

My biggest factor in choosing an Open Source Software platform is the size * activity of the community.

I think it would be a huge improvement to give me a sense of whether the product is well supported.

There are a few reasons for this...

1. I want to choose an OSS that will have security vulnerabilities resolved in a timely fashion.

2. I want the product to keep up with the times and changing landscape.

3. I want to be able to converse usefully in a forum to resolve issues.

4. I don't want to search for a new replacement product and undergo a migration because all the devs have abandoned the project in x years.

ronyfadel(10000) 4 days ago [-]

Neat! I'm wondering if you're looking to monetize this, and how? I'm working on a curated list of [not software] and wondering about how I can monetize it.

techindex(10000) 3 days ago [-]

I have a link to one of my other projects on the site - https://duetapp.com

Beyond that, I don't really have any specific plans to monetize.

JackPoach(2710) 3 days ago [-]

Bitrix24 isn't on the list, even though self-hosted editions have been available since 2008: https://www.bitrix24.com/self-hosted/

Am I blind, or is there no 'Submit' section on the website?

Historical Discussions: No to Chrome (December 06, 2019: 615 points)

(624) No to Chrome

624 points 3 days ago by dredmorbius in 199th position

notochrome.org | | comments | anchor

We can no longer pretend that Google is a positive force in the world.

There is a simple first step that every internet user can take to make things a little better. Seek out a better web browser to replace Google Chrome and tell everyone to do the same.

No to Chrome is designed as a starting point for anyone who uses the internet to send a message to Google that their relentless disregard for our rights, dignity, democracy and communities will not be tolerated.

There are many ways to protest against Google, ranging from tweets to full boycotts, but No to Chrome is designed so that anyone who uses the internet can participate easily and immediately.

All Comments: [-] | anchor

aykutcan(10000) 3 days ago [-]

Those claims about YouTube, especially 'YouTube has contributed to a growth of the flat earth conspiracies at the expense of scientific fact.'

Do you understand how algorithms or statistics work?

They are giving people what they want: freedom. People are watching those videos and making them more popular. They are free to share their thoughts.

Even Google can't predict all the negativity and prevent it with computer systems. Expecting Google to be a god is unfair.

I think this is hatred for Google more than an argument against Google.

I dislike Chrome and have been using Firefox for a long time. But I don't think this is objective or completely true.

davidu(3932) 3 days ago [-]

You know this is a very superficial understanding of what has happened, right? It doesn't capture how the design of the algorithms is actually shaping people's views. The idea that it just gives you what you want is simply not an accurate understanding of what is happening.

I suggest you read the work being published by Stanford U and Renee DiResta -- https://twitter.com/noUpside

brianzelip(4064) 3 days ago [-]

Does anyone know the Firefox dev tools equivalent of Chrome dev tools 'JavaScript contexts' drop down menu? I'm not sure how to describe it, but here are two screenshots from a tutorial where I saw it:

1. With mouse hovering over the drop down menu showing the displayed tool tip


2. With clicked on drop down menu showing the 'javascript contexts'


brianzelip(4064) 3 days ago [-]

Oh, by 'js contexts' do they mean 'all the javascripts related to the page?' If so, then I guess this file tree-like explorer of js is the Firefox equivalent?


sombremesa(10000) 3 days ago [-]

This button on the top right is the closest you get, I think:


digitarald(10000) 3 days ago [-]

DevTools member here: We have a more powerful target switching (hidden) in the toolbar but are also working to better expose JS context switching early next year.

Debugger's new threads panel also solves this partially when you pause in a thread.

Are your use cases extensions, workers or iframes?

petjuh(10000) 3 days ago [-]

Is Chrome a bad browser? I feel that, unlike IE6 vs Firefox, Chrome is not inferior to the competition and in fact moves faster, not slower, toward innovation.

davedx(2643) 3 days ago [-]

Chrome is technically excellent software. This is all about how the company Google conducts its business.

theklr(10000) 3 days ago [-]

Sometimes Chrome is dictated by Google's profit interests; they like to brute-force implementations for their own interest while masking them as making the web better, when there are consortiums they could contribute to. It's not the speed of innovation, it's the why of the speed (especially with things like PWA (3/4 years ago), AMP, and Houdini).

lcall(4221) 3 days ago [-]

I still use Chromium (and Iridium, a derivative that hopefully doesn't send info to Google), specifically on OpenBSD, for reasons summarized here (lower chance of privilege escalation, limiting bad behavior): https://news.ycombinator.com/item?id=21566041

(...and discussed further in the parents of the above link, like: https://news.ycombinator.com/item?id=21559122 or the full recent related discussion here: https://news.ycombinator.com/item?id=21557309 ).

Given which, I might switch to Firefox for some uses after the next OpenBSD release, where it will have pledge/unveil support (preventing it from accessing the computer beyond config-specified limits).

Edit: One thing I wish I knew about Firefox is a way, without extensions/add-ons, to limit which sites can use javascript/images/etc., and/or to open multiple config tabs at once to quickly turn those on by exception for occasional specific sites, as I do with Chrome. Exception lists would be even better. This was discussed a little at the links above.

RonanTheGrey(3738) 3 days ago [-]

> to limit which sites can use javascript/images/etc

The new privacy tools allow you to tune this from the url bar, but I think it only applies to cookies, javascript and trackers.

> and/or to open multiple config tabs at once

yeah that's the sticky part. It only allows one open config tab unfortunately.

azangru(10000) 3 days ago [-]

Chrome/ium is a wonderful web development environment. Can't see myself moving away from it any time soon.

fastball(4141) 3 days ago [-]

Use Brave. All the good without any of the bad.

digitarald(10000) 3 days ago [-]

Firefox DevTools member here. If you have tried them recently, I'd love your feedback on what's blocking you.

sojournerc(10000) 3 days ago [-]

I used to think the same, but now do all web dev in Firefox. For my purposes it has parity with chrome, and works just as well.

Also means if it works in Firefox it will most likely work in chrome. The opposite was not always true.

ssn(2706) 3 days ago [-]

Contributing to the diversity of the web is an important action that all of us should be committed to.

Read Zeldman's 'Browser diversity starts with us.' http://www.zeldman.com/2018/12/07/browser-diversity-starts-w...

AnIdiotOnTheNet(3914) 3 days ago [-]

The battle is already lost. There are really only two serious options today for browser engines, which is even fewer than there are for OSs! And the reason is the same: it is far too complicated to create new implementations of either.

monkeynotes(10000) 3 days ago [-]

Losing Chrome doesn't necessarily stop you being tracked. If you make a vacuum, it will be filled by someone else. I am not suggesting apathy, but opting out just makes holes for someone else to fill. And as we have seen in the past, competition is great, but it quickly gets amalgamated behind the scenes; just look at the alcohol industry. All those brands look like healthy competition, but it's largely one parent company.

You can imagine that a young Instagram or WhatsApp could have been on the safe list of alternate social platforms, but Facebook bought them both. Firefox isn't impervious to a change of management and ownership 5-10 years down the line.

So what you are left with is using esoteric applications and social spaces that no one really cares about. That's analogous to living off the grid. It just isn't suitable for the majority of people, and as such it isn't a solution at all.

The problem isn't Chrome or Google per se, it's technology. We should be pressuring the big guns to conform to our needs, not opting out and abandoning the people who benefit from large social platforms and convenient, well supported applications.

nine_k(4189) 3 days ago [-]

The logic is simpler.

The makers of Firefox (and apparently Safari) do have an incentive to keep your privacy.

The makers of Chrome have an incentive to keep you tracked.

So it's not the technology, it's business models. (Basically it's the rude awakening to TANSTAAFL.)

Pigo(10000) 3 days ago [-]

Try buying glasses that aren't made by Luxottica. I really wish we still cared about monopolization.

paulryanrogers(4092) 3 days ago [-]

> The problem isn't Chrome or Google per se, it's technology. We should be pressuring the big guns to conform to our needs, not opting out...

How do you propose to do that without boycotting? I'm all for regulating overly large corporations, but until that happens I can also vote with my feet.

onesmallcoin(10000) 3 days ago [-]

I work with Chromium using Chrome's debug protocol (CDP) to do automation. You'd be surprised how much of the browser you're dealing with is a facade. They ship things like a headless browser that doesn't obey the most basic of requests given to it, e.g. hiding all the scrollbars, in addition to using double the resources and taking twice as long to perform any operation. I tell you, that thing looks shiny on the surface, but there is nothing there. I think the work should be done on building and making WebKit, Blink, and Gecko better as the reference implementations before we try, yet again, to make something shiny that can barely do the job it should.

onesmallcoin(10000) 3 days ago [-]

Also, there's no support for plugin web interfaces in headless Chrome, meaning every time you want to test a plugin you have to bring up a virtual X11 server to deal with it.

elchin(10000) 3 days ago [-]

Isn't Google the biggest donor to Mozilla foundation?

rebelwebmaster(4169) 3 days ago [-]

Google pays Mozilla Corporation to be the default search engine in Firefox. I wouldn't classify that as a donation, though.

lavishsaluja(10000) 3 days ago [-]

if you consider being a client & paying as a donation, yes!

quotha(4117) 3 days ago [-]

Does anyone use the Brave browser? I started a few days ago and it's a pretty interesting idea.

kgwxd(2421) 3 days ago [-]

Traffic/content blocking is better left to a third party. Brave supports extensions but will be just as broken as Chrome when Chromium gets crippled. If site owners and ad networks want to make deals with each other, leave me and my machine out of it, like the rest of the advertising world does.

scholia(1225) 3 days ago [-]

Search the page and you'll find several mentions of Brave.

What I want to know is why someone would pick Brave instead of the Epic Privacy Browser...

... though personally, I think it's far better to support Firefox as it's now the only viable alternative browser that isn't based on Chrome.

Chrome is now so dominant that people can just ignore open web standards and develop for Chrome instead. This is a Bad Thing.

wumms(10000) 3 days ago [-]

> "Setting up such a "Google free" phone requires a significant amount of time, dedication, and skill (not to mention a personal server), definitively far beyond to what I'd trust my mother to comprehend." [0]

@ project owner: I'd consider changing (or replacing) the quote to use age and gender neutral language; e.g. '[...], definitively far beyond to what I'd trust the average user to comprehend'.

[0] https://notochrome.org/google-products/android-mobile-os/

Edit: His mother might not be able to root her phone, but my friend D.'s might be forking LineageOS for fun - so please don't generalize.

zozbot234(10000) 3 days ago [-]

The proper age- and gender-neutral language is of course 'Aunt Tillie'. /s

013(10000) 3 days ago [-]

If the project owner is referring to their own mother, why would they need to use age and gender neutral language?

Antoninus(10000) 3 days ago [-]

I'm quite satisfied with my move away from Google products. Moved from Gmail to ProtonMail, Chrome to Firefox, and search to DDG. DDG isn't perfect though, and I find myself using the !g bang more often than I'd like.

floriol(10000) 3 days ago [-]

I recently switched to Qwant, which seems to be a decent search engine owned by a European company (as far as I know DDG is American, so pretty weak legal protection of user data). Though I also occasionally have to search on Google.

Diederich(10000) 3 days ago [-]

> Moved from gmail to protonmail

Obligatory mention of https://fastmail.fm/ I've been using the paid version (it's quite inexpensive) for over a decade now and it remains fantastic.

GuB-42(10000) 3 days ago [-]

How do you search through mail in protonmail? Due to the nature of end-to-end encryption, online search features are very limited. You can't build an index if you can't read the data.

Do you download everything and search offline? Or maybe, unlike me, you don't rely on search so much?

mkbkn(3794) 3 days ago [-]

Same for me, but for search I prefer Ecosia.org over DDG. It plants trees with its revenues and respects privacy like DDG.

cyberpunk(10000) 3 days ago [-]

Try !s instead -- same results as google (apparently) but via startpage which supposedly doesn't track..

I don't really understand how startpage is able to operate such a service, I assume it's maybe because if google started trying to stop them it would be rather hypocritical?

tomaszs(10000) 3 days ago [-]

Less Google, more internet. Simple as that

mxuribe(4027) 3 days ago [-]

If by 'internet' you mean the more traditional thought of internet with all of its decentralized goodness (and services and such), then YES 1000 times!!

azdv(10000) 3 days ago [-]

Interesting, I just moved back to Firefox after many years of using Chrome.

I have to admit, it has gotten a lot faster recently, Firefox sync is working very well, and the Firefox Android app is a pleasure to use (being able to install uBlock origin is a huge plus).

Anything else I missed on Chrome was easily solved by an addon, or a small tweak on userChrome.css (customizing the browser interface by overriding its CSS is amazing btw).

Braggadocious(10000) 3 days ago [-]

it's literally the same engine as chrome. the only reason i use chrome is for the dev tools.

giancarlostoro(3177) 3 days ago [-]

> Firefox sync is working very well

Been using it for years and years; I never could adopt Chrome. When it first came out I tried it, but there was no adblocker; then when there was one, it was too crippled. My guess is that this was intentional, since they're going back to crippling adblockers. Now we have network-level adblockers, which I think is the best approach.

I only use Chrome because some kiosk-like application I'm developing will ship through Chrome (not Electron) / WebKit. Also my boss prefers to see it, so I demo with Chrome mainly, but I do all my real browsing on Firefox. I'm a geek/developer, so I have Firefox and Chrome on every system I have. Except on Android: unless it's preinstalled for me, I won't get it.

oefrha(4042) 3 days ago [-]

I can't solve highlighting search results in the scroll bar, and apparently I search a surprising amount and am absolutely crippled without scroll bar highlighting, so I'll stick to Chrome until I can't (e.g. when content blocking is crippled).

jeegsy(10000) 3 days ago [-]

Has firefox solved its long standing issue with the rendering of radio buttons?

atomi(10000) 3 days ago [-]

On Firefox, one thing that really bothers me still is that it's impossible to remove a root domain entry from the omnibox.

Like if I type 're' it completes to reddit.com even though I've deleted that entry. Chrome will respect the deletion and complete to reddit.com/r/all.

mtone(10000) 3 days ago [-]

Also happy with my move to Firefox, it surpassed my expectations. Two (small) pain points have been:

- customizing userChrome.css. It was a bit time-consuming to figure out at first (I wanted to remove the top bar and use vertical tabs). Happy to be able to do it, but it's not exactly user-friendly.

- Rejection of the Windows local certificate store, which requires fiddling with a setting to enable. If my certificate store is compromised, Firefox is not going to save me. I don't recommend it over Chrome at work, since we have one of these for a local server and I'd rather not generate support tickets.

Both of these are not part of Firefox Sync and require manual handling on each PC.

pyython(10000) 3 days ago [-]

Switching to Brave has worked out great for me. All of my extensions work, browsing experience is almost exactly the same, etc. I love the built-in privacy features (Tor is a keyboard shortcut away!). The one issue that I've run into is that the built-in ad blocker is a little aggressive sometimes to the point where it breaks page functionality. At those times, I just deactivate it for that page and go about my business.

flavius29663(10000) 3 days ago [-]

Brave is still based on Chromium/Blink. You might get more privacy, but you're still supporting the one monopoly of the web, with Google deciding all the web standards going forward. That is the real bad thing.

ossworkerrights(10000) 3 days ago [-]

I am quite satisfied with using Chrome. Once there is a suitable alternative I will gladly move away from it, though; it would just be a breath of fresh air.

alex_duf(4115) 3 days ago [-]

Out of curiosity what makes Firefox or any other web browser fail to be a suitable alternative to you?

husainalshehhi(10000) 3 days ago [-]

One of the things I like about Google Chrome is the omnibar: I can search within a website directly from the bar. For example, I can type amazon.com<TAB> and then search directly. This also works for many of my company's internal websites. Can Firefox do that?

eythian(10000) 3 days ago [-]

Not sure what's built in, but you can definitely make it do that by right-clicking on a search box and making a shortcut. I have 'wp' for wikipedia, 'yt' for youtube, etc.

ropiwqefjnpoa(10000) 3 days ago [-]

Even if you have no issue with Google, this is still a good idea to encourage competition. Honestly, Chrome is not as good as it used to be anyway. For me, Google Maps and Waze are the hardest things to kick. And YouTube...

michaelbrooks(4171) 3 days ago [-]

I wish someone could create a viable YouTube competitor, seems like there's no one that has succeeded just yet.

Doctor_Fegg(3372) 3 days ago [-]

Once you've kicked the car-driving habit, kicking Google Maps is easy...

lallysingh(4092) 3 days ago [-]

OpenStreetMap has solid data, but my periodic attempts at finding a good app haven't been successful.

donaltroddyn(10000) 3 days ago [-]

In terms of browsers, I have moved back to Firefox from Google for my personal/professional browsing, but I have a product that is heavily reliant on Puppeteer and CDP. There are moves towards interoperability in Firefox, but that's my current blocker.

reportgunner(10000) 3 days ago [-]

Try blocking all images on Youtube so you won't see thumbnails. It helps a ton.

IggleSniggle(10000) 3 days ago [-]

Apple Maps is now quite usable if you're on iOS...although it's no Waze.

jstanley(982) 3 days ago [-]

I use https://maps.openrouteservice.org/ and find it to be perfectly adequate.

PretzelFisch(1584) 3 days ago [-]

Is it better to push people to Firefox and Safari, or to include Opera and Edge as well so we don't get another de facto standard?

tenacious_tuna(10000) 3 days ago [-]

Part of the issue is that most browsers other than Firefox at this point use the Chromium engine [0]. Using a browser whose engine differs from both Firefox's and Chromium's would be most ideal, since that would put even more value on the use of the real standards, and not what the market leader implements.

[0] https://en.wikipedia.org/wiki/Chromium_(web_browser)#Browser...

err4nt(4185) 3 days ago [-]

Blink based users (Chrome, new Edge) are 2/3 users.

WebKit based users (Safari, GNOME Web) are 1/10 users.

Gecko based users (Firefox) are 1/25 users.

Opera and Edge are negligible slivers of Blink's market share. IE and old Edge are dead.

greggman2(10000) 3 days ago [-]

You should definitely not be pushing anyone to Safari. Apple's monopoly on browser engines on iOS gives them veto power over all web standards, since iOS users have no alternate browser engines (like Firefox).

docuru(3792) 3 days ago [-]

Google is masking website content as their own content. We need to make a move before it's too late.

cies(3833) 3 days ago [-]


nytesky(4222) 3 days ago [-]

Is there any way to do multiple profiles on iOS Firefox or any non-Chrome browsers? That is one feature I will miss from Chrome on iOS.

And honestly, the profile manager on Firefox is annoying, since you have to start it from the command line and it can't run multiple instances/profiles in parallel.

Analemma_(2878) 3 days ago [-]

The profile manager is annoying, but at least for me, Containers do about 95% of what I used profiles for, and they are much more manageable: you can right-click a link and use 'Open in new container'.

TekMol(2923) 3 days ago [-]

Nice black and white design.

I would remove the light grey bars behind the logos.

Clicking on the images at the bottom should open the respective website instead of the image.

The 'why' page should mention AMP. Instead of ranking websites by user-friendliness, by boosting AMP they rank them by affiliation with Google. I think that is the most evil thing Google has done so far.

missblit(10000) 3 days ago [-]

As long as we're providing website critiques:

* 'Google want to automate us' should be 'Google wants to automate us'

* They should consider making the page about why not to use chrome in particular more discoverable from the main page

* 'Google have been accused of ensuring other Google products don't work on Chrome' should be 'Google have been accused of ensuring other Google products only work on Chrome'

* They block Google search from indexing the page, but they also block all other crawlers. Why not let DuckDuckBot through?

emptybottle(10000) 3 days ago [-]

Safari is my default browser and it works for me, along with DDG.

I may switch to Firefox at some point, but currently there's a long standing bug that breaks audio with multi-channel audio interfaces (FF does not adhere to the mac system output channel settings and instead defaults to channels 1-2)

The only site I need chrome to use is google meet, naturally. Which, if you've turned on automatic closed captioning and seen it perform speech-to-text with user attribution in real time, is horrifying by itself.

mepiethree(10000) 3 days ago [-]

Do Apple/Safari offer great privacy?

RenRav(10000) 3 days ago [-]

If this gains enough traction I can see Google releasing a product as 'Noto' to poison search results.

tiborsaas(3886) 3 days ago [-]

Not really well argued (not at all, actually) for a non-techie user who at most does email, YouTube, work and shopping...

They are probably 99% of users.

The site feels like it's authored with a tin-foil hat on.

oehpr(10000) 3 days ago [-]

I agree with this.

While I agree with the rhetoric of this site, this site really is just empty rhetoric.

The 'why' on this site is, in my opinion, a terrible, ineffective, dogmatic argument. I agree with the dogma, but I'm not the one that needs convincing.

saagarjha(10000) 3 days ago [-]

Their browsers page (https://notochrome.org/find-a-new-browser/) makes it somewhat difficult to understand that Firefox is also available as an option on macOS and iOS. Plus it has a link to Midori that looks like it's been taken over by a scam.

vxNsr(2963) 3 days ago [-]

The Midori thing is weird... the page appears to be unfinished. Checking the Wayback archive shows that the whole project went through a rough patch and is likely on life support, if even alive anymore. For a while the homepage was a regular WordPress blog feed; sometime at the start of this year they updated it to its current state. They also claim to have merged with the Astian Foundation, but that website also appears unfinished, with lorem ipsum throughout and a Gmail email address.

AVTizzle(4073) 3 days ago [-]

They really miss an opportunity to say WHY anywhere on the landing page, instead going straight to: 'Chrome is bad. Here are some alternatives.'

commandlinefan(10000) 3 days ago [-]

There's a 'why' link on the upper right-hand side, but even if you click on it, you'll see they don't really make a compelling case to distrust google. Because they scanned books and digitized them?

FussyZeus(4197) 3 days ago [-]

They don't say Chrome itself is bad, and therefore do not seek to explain it. The page is less a technical break down and more a call to action ala activism. Chrome itself isn't an issue, it's one of many tentacles, and one of the bigger ones, leading back to Google.

tiborsaas(3886) 3 days ago [-]

Even the WHY page is bad. It only states that 'google is bad' 'they are not your friends'. Okay, proof, cases, overwhelming evidence that would force people to stop?

qxnqd(10000) 3 days ago [-]

Who is behind this website?

aviraldg(3667) 3 days ago [-]

> No to Chrome is an alliance led by Berlin based Eduardo Smith and UK based James Mullarkey.

jmstfv(1951) 3 days ago [-]

I have been using Firefox for several years now. Energy management is still an unsolved problem on macOS (Firefox 71 on 10.14.6). They have been making improvements in the last several releases, but 'Avg Energy Impact' remains around 40 for me (when browsing web pages, higher when streaming video).

I also noticed how different the colors are between Chrome and Firefox. That becomes more obvious in dark mode.


theandrewbailey(2094) 3 days ago [-]

I started using Firefox in 2004 and have never stopped using it as my main driver since.

leeoniya(3249) 3 days ago [-]

That bug is closed as a dupe of [1], which is marked fixed, though it's not clear to me whether the fix was to implement a flag that enables proper rendering or to do it properly by default.

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=667431

mrpopo(10000) 3 days ago [-]

The comments from [email protected] in this bug report are scary.

'We're not doing as the spec tells us to, BUT that probably gives us performance gains, most users don't care about it anyway, I prefer it that way, webdevs should fix it, we should fix the JPEG standard instead (!?)...'

Why so much resistance?

qxnqd(10000) 3 days ago [-]

Try Safari and/or Brave

matty22(10000) 3 days ago [-]

Can anyone recommend a web-based, privacy respecting alternative to Google Docs/Sheets? It's the one part of Google products I haven't found a good replacement for.

kgwxd(2421) 3 days ago [-]

I wish LibreOffice had a calendar with an API, accessible from Calc. The only thing keeping my Google account open is a sheets/calender script that does some financial forecasting based on my calendar entries.

jpkeisala(10000) 3 days ago [-]

I wonder why, being privacy focused, they did not put Brave as the recommended browser instead of Firefox?

michaelbrooks(4171) 3 days ago [-]

It seems like they're avoiding any and every Chromium-based browser. I wonder what their reason behind this would be since it's open-source so any browser using Chromium can take Google out of the code.

ailideex(4202) 3 days ago [-]

I'm sure if you look you can find something which tells you that Microsoft is also privacy focused in their marketing material. Heck, even Google says they are focused on privacy:


> We know security and privacy are important to you – and they are important to us, too. We make it a priority to provide strong security and give you confidence that your information is safe and accessible when you need it.

I guess they should just shut down the site now.

mar77i(10000) 3 days ago [-]

You just reminded me of an argument I heard about two decades back: that if 'you pirate Windows, you still kind of buy their product'. The rationale of No to Chrome might be along these lines: if you use Chromium, aren't you banking off technology that is, in this sense, Google's?

But here's the thing: I think you have a point. As far as it's useful browser technology and open source, I see about as little a problem with it as you do.

Historical Discussions: Sinkholed (December 04, 2019: 614 points)
Sinkholed – A DNS Horror Story: How I Lost and Regained My .IN Domain Name (December 03, 2019: 4 points)

(614) Sinkholed

614 points 5 days ago by swiftsecurity in 10000th position

susam.in | Estimated reading time – 10 minutes | comments | anchor

By Susam Pal on 03 Dec 2019

On 26 Nov 2019 at 14:55 UTC, I logged into the server that hosts my website to perform a simple maintenance activity. Merely three minutes later, at 14:58 UTC, the domain name susam.in used to host this website was transferred to another registrant without any authorization by me or any notification sent to me. Since the DNS results for this domain name were cached on my system, I was unaware of this issue at that time. It would take me three days to realize that I had lost control of the domain name I had been using for my website for the last 12 years. This blog post documents when this happened, how this happened, and what it took to regain control of this domain name.

On 29 Nov 2019 at 19:00 UTC, when I visited my website hosted at https://susam.in/, I found that a zero-byte file was being served at this URL. My website was missing. In fact, the domain name resolved to an IPv4 address I was unfamiliar with. It did not resolve to the address of my Linode server anymore.

I checked the WHOIS records for this domain name. To my astonishment, I found that I was no longer the registrant of this domain. An entity named The Verden Public Prosecutor's Office was the new registrant of this domain. The WHOIS records showed that the domain name was transferred to this organization on 26 Nov 2019 at 14:58 UTC, merely three minutes after I had performed my maintenance activity on the same day. Here is a snippet of the WHOIS records that I found:

Domain Name: susam.in
Registry Domain ID: D2514002-IN
Registrar WHOIS Server:
Registrar URL:
Updated Date: 2019-11-26T14:58:00Z
Creation Date: 2007-05-15T07:19:26Z
Registry Expiry Date: 2020-05-15T07:19:26Z
Registrar: NIXI Special Projects
Registrar IANA ID: 700066
Registrar Abuse Contact Email:
Registrar Abuse Contact Phone:
Domain Status: clientTransferProhibited http://www.icann.org/epp#clientTransferProhibited
Domain Status: serverRenewProhibited http://www.icann.org/epp#serverRenewProhibited
Domain Status: serverDeleteProhibited http://www.icann.org/epp#serverDeleteProhibited
Domain Status: serverUpdateProhibited http://www.icann.org/epp#serverUpdateProhibited
Domain Status: serverTransferProhibited http://www.icann.org/epp#serverTransferProhibited
Registry Registrant ID:
Registrant Name:
Registrant Organization: The Verden Public Prosecutor's Office
Registrant Street:
Registrant Street:
Registrant Street:
Registrant City:
Registrant State/Province: Niedersachsen
Name Server: sc-c.sinkhole.shadowserver.org
Name Server: sc-d.sinkhole.shadowserver.org
Name Server: sc-a.sinkhole.shadowserver.org
Name Server: sc-b.sinkhole.shadowserver.org

The ellipsis denotes some records I have omitted for the sake of brevity. There were three things that stood out in these records:

  1. The registrar was changed from eNom, Inc. to NIXI Special Projects.
  2. The registrant was changed from Susam Pal to The Verden Public Prosecutor's Office.
  3. The name servers were changed from Linode's servers to Shadowserver's sinkholes.
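These fields are easy to pull out of a WHOIS response mechanically. Here is a minimal sketch in Python, using an abridged copy of the record quoted above; the regex-based extraction is purely illustrative, not a general WHOIS parser:

```python
import re

# Abridged WHOIS snippet, copied from the record quoted above.
WHOIS = """\
Registrar: NIXI Special Projects
Registrant Organization: The Verden Public Prosecutor's Office
Name Server: sc-c.sinkhole.shadowserver.org
Name Server: sc-d.sinkhole.shadowserver.org
Name Server: sc-a.sinkhole.shadowserver.org
Name Server: sc-b.sinkhole.shadowserver.org
"""

def field(name, text):
    """Return all values of a 'Key: value' WHOIS field (a key may repeat)."""
    return re.findall(rf"^{re.escape(name)}:\s*(.+)$", text, re.MULTILINE)

registrar = field("Registrar", WHOIS)[0]
registrant = field("Registrant Organization", WHOIS)[0]
nameservers = field("Name Server", WHOIS)

print(registrar)         # NIXI Special Projects
print(registrant)        # The Verden Public Prosecutor's Office
print(len(nameservers))  # 4
```

In practice the raw text would come from a `whois susam.in` query rather than a hard-coded string.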

On searching more about the new registrant on the web, I realized that it was a German criminal justice body that was involved in the takedown of the Avalanche malware-hosting network. It took a four-year concerted effort by INTERPOL, Europol, the Shadowserver Foundation, Eurojust, the Lüneburg Police, and several other international organizations to finally destroy the Avalanche botnet on 30 Nov 2016. In this list of organizations, one name caught my attention immediately: The Shadowserver Foundation. The WHOIS name server records pointed to Shadowserver's sinkholes.

The fact that the domain name was transferred to another organization merely three minutes after I had performed a simple maintenance activity got me worried. Was the domain name hijacked? Did my maintenance activity on the server have anything to do with it? What kind of attack might one have pulled off to hijack the domain name? I checked all the logs, and there was no evidence that anyone other than me had logged into the server or executed any commands or code on it. Further, a domain name transfer usually involves email notification and authorization. None of that had happened. It increasingly looked like the three-minute interval between the maintenance activity and the domain name transfer was merely a coincidence.

More questions sprang up as I thought about it. The Avalanche botnet was destroyed in 2016. What has that got to do with the domain name being transferred in 2019? Did my server somehow become part of the Avalanche botnet? My server ran a minimal installation of the latest Debian GNU/Linux system. It was always kept up-to-date to minimize the risk of malware infection or security breach. It hosted a static website composed of static HTML files served with Nginx. I found no evidence of unauthorized access of my server while inspecting the logs. I could not find any malware on the system.

The presence of Shadowserver sinkhole name servers in the WHOIS records was a good clue. Sinkholing of a domain name can be done both maliciously and constructively. In this case, it looked like the Shadowserver Foundation intended to sinkhole the domain name constructively, so that any malware client trying to connect to my server nefariously would end up connecting to a sinkhole address instead. My domain name was sinkholed! The question now was: why was it sinkholed?
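One telltale sign of a constructively sinkholed domain is that its delegated name servers fall under a known sinkhole zone, as Shadowserver's do in the WHOIS record above. A minimal sketch of such a check (the NS lists are hard-coded here for illustration; in practice they would come from a `dig NS` query or the WHOIS output):

```python
# Known sinkhole zone(s); Shadowserver's appears in the WHOIS record above.
SINKHOLE_ZONES = ("sinkhole.shadowserver.org",)

def is_sinkholed(nameservers):
    """True if any delegated name server sits under a known sinkhole zone."""
    return any(ns.lower().rstrip(".").endswith(zone)
               for ns in nameservers
               for zone in SINKHOLE_ZONES)

print(is_sinkholed(["sc-a.sinkhole.shadowserver.org",
                    "sc-b.sinkhole.shadowserver.org"]))    # True
print(is_sinkholed(["ns1.linode.com", "ns2.linode.com"]))  # False
```

A suffix match like this would have flagged the change immediately, whereas an unfamiliar but ordinary name server would only indicate a transfer.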

On 29 Nov 2019 at 19:29 UTC, I submitted a support ticket to Namecheap to report this issue. At 21:05 UTC, I received a response from Namecheap support saying that they had contacted Enom, their upstream registrar, to discuss the issue. There was no estimate for when a resolution might be available.

At 21:21 UTC, I submitted a domain name transfer complaint to the Internet Corporation for Assigned Names and Numbers (ICANN). I was not expecting any response from ICANN because they do not have any contractual authority on a country code top-level domain (ccTLD).

At 21:23 UTC, I emailed the National Internet Exchange of India (NIXI). NIXI is the ccTLD manager for the .IN domain and has authority over it. I found their contact details in the IANA Delegation Record for .IN. Again, I was not expecting a response from NIXI, because they do not have any contractual relationship directly with me. They have a contractual relationship with Namecheap, so any communication from them would be received by Namecheap, and Namecheap would have to pass it on to me.

At 21:30 UTC, ICANN responded and said that I should contact the ccTLD manager directly. As I explained in the previous paragraph, I had already done that, so there was nothing more for me to do except wait for Namecheap to provide an update after their investigation. By the way, NIXI never replied to my email.

On 30 Nov 2019 at 07:30 UTC, I shared this issue on Twitter. I was hoping that someone who had been through a similar experience could offer some advice. In fact, soon after I posted the tweet, a kind person named Max from Germany generously offered to help by writing a letter in German to the new registrant, which was a German organization. The reason for sinkholing my domain name was still unclear. I hoped that with enough retweets, someone closer to the source of truth could shed some light on why and how this had happened.

At 09:54 UTC, Richard Kirkendall, founder and CEO of Namecheap, responded to my tweet and informed me that they were contacting NIXI regarding the issue. This seemed like a good step towards resolution. After all, the domain name was no longer with their upstream registrar, Enom; it was now with NIXI, as was evident from the WHOIS records.

Several other users tweeted about my issue, added more information about what might have happened, and retweeted my tweet.

On 1 Dec 2019 at 11:48 UTC, Benedict Addis from the Shadowserver Foundation contacted me by email. He said that they had begun looking into this issue as soon as one of the tweets about it had referred to their organization. He explained in his email that my domain name had been sinkholed accidentally as part of their Avalanche operation. Although it has been three years since the initial takedown of the botnet, they still see over 3.5 million unique IP addresses connecting to their sinkholes every day. Unfortunately, their operation inadvertently flagged my domain name as one of the domain names to be sinkholed because it matched the pattern of command and control (C2) domain names generated by a malware family named Nymaim, one of the malware families hosted on Avalanche. Although they had validity checks to avoid sinkholing false positives, my domain name unfortunately slipped through those checks. Benedict mentioned that he had just raised this issue with NIXI and requested that they return the domain name to me as soon as possible.
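To see how such a false positive can happen, consider how a DGA works: the malware deterministically derives a stream of domain names from a seed, and a takedown operation sinkholes every name that matches the generated pattern. The toy generator below is emphatically not Nymaim's real algorithm; it is a deliberately simple stand-in (seeded MD5 hashing, invented parameters) to show the mechanism by which a legitimate domain of the same length and character shape can get swept up.

```python
# Toy DGA: deterministically derive pseudo-random domain names from a
# seed, the way botnet malware derives its C2 rendezvous domains.
# NOT Nymaim's actual algorithm -- an illustrative stand-in only.
import hashlib

def toy_dga(seed: int, count: int, tld: str = "in"):
    """Generate `count` pseudo-random 8-letter domain labels under `tld`."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{i}".encode()).hexdigest()
        # Map the first 8 hex digits onto lowercase letters a-z.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:8])
        domains.append(f"{label}.{tld}")
    return domains

# A takedown operation registers or sinkholes every generated name, plus
# anything that merely *looks* generated. A real domain that happens to
# match the pattern is a false positive.
generated = set(toy_dga(seed=2019, count=1000))
print(sorted(generated)[:3])           # a few generated-looking .in names
print("example.in" in generated)       # False -- but a short, random-looking
                                       # real domain could match the pattern
```

Pattern-based matching on top of exact generation is what makes the sweep both effective against the botnet and risky for bystanders, which is why the validity checks Benedict mentioned exist at all.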

On 2 Dec 2019 at 04:00 UTC, when I looked up the WHOIS records for the domain name, I found that it had been returned to me already. At 08:37 UTC, Namecheap support responded to my support ticket to say that they had been informed that NIXI had returned the domain name to its original state. At 09:55 UTC, Juliya Zinovjeva, Domains Product Manager of Namecheap, commented on Twitter and confirmed the same thing.

Despite the successful resolution, it was still quite unsettling that a domain name could be transferred to another registrant and sinkholed for a perceived violation. I thought there would be more checks in place to confirm that a perceived violation was real before a domain name could be transferred. Losing a domain name I had been using actively for 12 years was an unpleasant experience. Losing a domain name accidentally should have been a lot harder than this. Benedict from the Shadowserver Foundation assured me that my domain name would be excluded from future sinkholing for this particular case. However, the possibility that this could happen again due to another, unrelated operation by another organization in future is disconcerting.

I also wondered if a domain name under a country code top-level domain (ccTLD) like .in is more susceptible to this kind of sinkholing than a domain name under a generic top-level domain (gTLD) like .com. I asked Benedict if it was worth migrating my website from .in to .com. He replied that, in his personal opinion, NIXI runs an excellent, clean registry and is very responsive in resolving issues when they arise. He also added that domain generation algorithms (DGAs) of malware are equally, and possibly more, problematic for .com domains. He advised against migrating my website.

Thanks to everyone who retweeted my tweet on this issue. Also, thanks to Richard Kirkendall (CEO of Namecheap), Namecheap support team, and Benedict Addis from the Shadowserver Foundation for contacting NIXI to resolve this issue.

All Comments: [-] | anchor

monkeynotes(10000) 5 days ago [-]

Someone less technical would likely have no idea what happened to their domain. An individual relying on their web presence for income could be massively impacted by something like this. There really does not seem to be a clear way for someone to a) know what the problem is, and b) get it resolved quickly.

gowld(10000) 5 days ago [-]

Every domain has a technical contact; it's part of the WHOIS schema. A non-technical website owner hires someone to handle technicalities, just as a non-mechanical car owner hires someone to handle their car's mechanics.

Sure, if you don't pay attention to the care of your domain, it can break in incomprehensible ways, just as if you don't pay attention to the care of your car, it can break in incomprehensible ways.

dylanpyle(3405) 5 days ago [-]

A couple of years ago we lost our domain [1] due to a registrar (that we we