Hacker News with comments/articles inlined for offline reading

Authors ranked on leaderboard
Last updated: August 03, 2021 12:38
Reload to view new stories

Front Page/ShowHN stories over 4 points from last 7 days
If internet connection drops, you can still read the stories
If there were any historical discussions on the story, links to all the previous stories on Hacker News will appear just above the comments.

Historical Discussions: How Dwarf Fortress is built (July 29, 2021: 1127 points)
700k lines of code, 20 years, and one developer: How Dwarf Fortress is built (July 28, 2021: 3 points)

(1128) How Dwarf Fortress is built

1128 points 5 days ago by andreareina in 10000th position

stackoverflow.blog | Estimated reading time – 12 minutes

Dwarf Fortress is one of those oddball passion projects that's broken into Internet consciousness. It's a free game where you play either an adventurer or a fortress full of dwarves in a randomly generated fantasy world. The simulation runs deep, with new games creating multiple civilizations with histories, mythologies, and artifacts.

It has become notorious, and rightly so. Individual dwarves have emotional states, favorite gems, and grudges. And it all takes place in an ASCII interface that looks imposing to newbies, but feels like the text crawl in The Matrix: craftsdwarf, river, legendary megabeast.

The entire game is the product of one developer, Tarn Adams, aka Toady One, who has been working on Dwarf Fortress since 2002. For the first four years it was a part-time project, but since 2006 it's been full time. He writes all the code himself, although his brother helps out with design and creates stories based on the game. Up until now, he's relied on donations to keep him going, but he's currently working on a version with pixel graphics and a revamped UI that will be available for purchase on Steam.

I reached out to Tarn Adams to talk about how he's managed a single, growing codebase for over 15 years, the perils of pathfinding, and debugging dead cats. Our conversation below has been edited for clarity.

Q: What programming languages and other technologies do you use? Basically, what's your stack? Has that changed over the 15-20 years you've been doing this?

A: DF is some combination of C and C++, not in some kind of standards-obeying way, but sort of a mess that's accreted over time. I've been using Microsoft Visual Studio since MSVC 6, though now I'm on some version of Visual Studio Community.

I use OpenGL and SDL to handle the engine matters. We went with those because it was easier to port them to OSX and Linux, though I still wasn't able to do that myself of course. I'm not sure if I'd use something like Unity or Unreal now if I had the choice since I don't know how to use either of them. But handling your own engine is also a real pain, especially now that I'm doing something beyond text graphics. I use FMOD for sound.

All of this has been constant over the course of the project, except that SDL got introduced a few years in so we could do the ports. On the mechanical side of the game, I don't use a lot of outside libraries, but I've occasionally picked up some random number gen stuff—I put in a Mersenne Twister a long while ago, and most recently I adopted SplitMix64, which was featured in a talk at the last Roguelike Celebration.
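SplitMix64 is small enough to quote in full. This is the widely circulated public-domain reference algorithm (by Sebastiano Vigna), not DF's actual integration of it:

```cpp
#include <cstdint>

// SplitMix64: a tiny, fast 64-bit generator, often used on its own or
// to seed larger PRNGs. The state is just a counter; the output is a
// bit-mixed function of it.
uint64_t splitmix64(uint64_t &state) {
    uint64_t z = (state += 0x9E3779B97F4A7C15ULL); // golden-ratio increment
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;   // avalanche mixing
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}
```

For a game, its appeal is that it is deterministic from a seed (useful for reproducible world generation) and only a few instructions per call.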

Q: What are the challenges in developing a single project for so long? Do you think this is easier to do by yourself? That is, because you wrote every line, is it easier to maintain and change?

A: It's easy to forget stuff! Searching for ';', which is a loose method but close enough, we're up to 711,000 lines, so it's just not possible to keep it all in my head now. I try to name my variables and objects consistently and memorably, and I leave enough comments around to remind myself of what's going on when I arrive at a spot of code. Sometimes it takes several searches to find the exact thread I'm trying to tug on when I go and revisit some piece of the game I haven't touched for a decade, which happens quite a bit. I'd say most changes are focused only on certain parts of the game, so there is kind of an active molten core that I have a much better working knowledge of. There are a few really crusty bits that I haven't looked at since before the first release in 2006.

Regarding the relative ease of doing things by myself, certainly for me, who has no experience working on a large multi-person project, this is the way to go! People obviously get good at doing it the other way, for example over in the AAA games context, and clearly multiple engineers are needed over there to get things done on time. I'd be hesitant to say I can go in and change stuff faster than they can, necessarily, since I haven't worked in that context before, but it's true that I don't have any team-oriented or bureaucratic hurdles to jump through when I want to make an alteration. I can just go do it. But I also have to do it alone.

Q: What's the biggest refactor/change that you had to make?

A: There have been some refactors that have lasted for months, redoing certain data structures and so forth, though I'm not sure anything here is ever strictly a refactor, since there are always opportunities to push the mechanics forward simultaneously, and it makes sense to do so when the code knowledge is fresh.

Adding the Z coordinate to make the game mechanically 3D (while still being text) was another one, and really the most mind-numbing thing I've probably ever done. Just weeks and weeks and weeks of taking logic and function calls that relied on X and Y and seeing how a Z fits in there.

Making the item system polymorphic was ultimately a mistake, but that was a big one.

Q: Why was this a mistake?

A: When you declare a class that's a kind of item, it locks you into that structure much more tightly than if you just have member elements. It's nice to be able to use virtual functions and that kind of thing, but the tradeoffs are just too much. I started using a "tool" item in the hierarchy, which started to get various functionality, and can now support anything from a stepladder to a beehive to a mortar (and pestle, separately, ha ha), and it just feels more flexible, and I wish every crafted item in the game were under that umbrella.

We do a lot of procedural generation, and if we wanted to, say, generate an item that acts partially like one thing and partially like another, it's just way harder to do that when you are locked down in a class hierarchy. Adding things like diamond dependencies and all that just ends up tying you in knots when there are cleaner ways to do it. If different components can just be turned off and on, it's easier, and allows you to do more.

I think some game developers refer to this as an entity component system, though it's my understanding that harder-core optimizer people think of that as something else where you're actually breaking things down by individual fields. Using a single object with different allocated subobjects is almost certainly worse for cache misses, which is a whole other thing, but the benefits in organization, flexibility, and extensibility just can't be ignored, and the different subfields of the tool item aren't used so often that it becomes an optimization issue.
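As a rough illustration of the tradeoff he's describing (a single item type with optional sub-objects switched on per instance, rather than a class per item kind), here is a minimal sketch; all names are invented, nothing from DF:

```cpp
#include <memory>
#include <string>

// Invented component types for illustration; DF's internals differ.
struct ContainerPart { int capacity; };   // beehive/barrel-like behavior
struct LadderPart    { int height; };     // stepladder-like behavior

// One item type; behaviors are allocated subobjects, null when absent.
struct ToolItem {
    std::string name;
    std::unique_ptr<ContainerPart> container;  // null = not a container
    std::unique_ptr<LadderPart>    ladder;     // null = not climbable
};

// Procedural generation can mix behaviors freely: a hybrid item is
// just an instance with more than one component filled in.
ToolItem make_hybrid() {
    ToolItem t{"strange artifact", nullptr, nullptr};
    t.container = std::make_unique<ContainerPart>(ContainerPart{10});
    t.ladder    = std::make_unique<LadderPart>(LadderPart{3});
    return t;
}
```

With a class hierarchy, that hybrid would need a new class (or multiple inheritance); here it's just data.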

Q: Did you run into any issues moving from 32 bit to 64 bit? That feels like one of those things that was huge at the time but has become pretty accepted.

A: Not at all! I'm struggling to think of a single issue. Fortunately for us, we already had our byte sizes under control pretty well, since it comes up when saving and loading the worlds; the format needed to be nailed down back when we set that up, especially because we've had to deal with endian stuff between OSes and all that. And we don't do any gnarly pointer operations or other stuff that might have gotten us in trouble. It just ended up being really good code for the 64-bit conversion due to our other practices, entirely by accident. The main issue was just getting the time together to make the change, and then it didn't end up taking nearly as long as I thought it would.
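The practice he's alluding to (nailing down on-disk field sizes and byte order so the format is identical on every platform) might look something like this generic sketch; none of it is DF's actual save code:

```cpp
#include <cstdint>
#include <vector>

// Generic sketch: fixed-width integers written byte-by-byte in a
// declared order (little-endian here) yield a file format that is
// identical on 32- and 64-bit builds and on machines of either
// endianness, so pointer-size changes never touch the save format.
void put_u32le(std::vector<uint8_t> &out, uint32_t v) {
    for (int i = 0; i < 4; ++i)
        out.push_back(uint8_t(v >> (8 * i)));
}

uint32_t get_u32le(const uint8_t *p) {
    uint32_t v = 0;
    for (int i = 0; i < 4; ++i)
        v |= uint32_t(p[i]) << (8 * i);
    return v;
}
```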

Q: I've seen other games similar to DF die on their pathfinding algorithms. What do you use and how do you keep it efficient?

A: Yeah, the base algorithm is only part of it. We use A*, which is fast of course, but it's not good enough by itself. We can't take advantage of some of the innovations on that (e.g. jump point) since our map changes so much. Generally, people have used approaches that add various larger structures on top of the map to cut corners, and because of the changing map, these just take too long to maintain, or are otherwise a hassle. So our approach has been to just keep track of connected components reachable by walking. These are pretty easy to update even when the map changes quickly, though it does involve some flood-filling. For instance, if water cuts the fortress in half, it needs to flood out from one side and update a whole half of the fortress to a new index, but once that's done, it's good, generally. Then that allows us to cut almost all failed A* calls from the game—our agents just need to query component numbers, and if the component numbers are the same, they know the call will succeed.
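The scheme he describes, flood-filling walkable tiles into numbered components and comparing component ids before ever invoking A*, can be sketched like this (a toy version, nothing like DF's scale or incremental updates):

```cpp
#include <queue>
#include <vector>

// Toy version of the idea: label each walkable tile with a component
// id via BFS flood fill; a path query compares ids first, so the
// expensive A* search is only run when a path is guaranteed to exist.
struct Grid {
    int w, h;
    std::vector<bool> walkable;
    std::vector<int>  comp;  // component id, -1 = unwalkable/unlabeled

    void label_components() {
        comp.assign(walkable.size(), -1);
        int next = 0;
        for (int i = 0; i < w * h; ++i) {
            if (!walkable[i] || comp[i] != -1) continue;
            std::queue<int> q;
            q.push(i); comp[i] = next;
            while (!q.empty()) {
                int c = q.front(); q.pop();
                int x = c % w, y = c / w;
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int d = 0; d < 4; ++d) {
                    int nx = x + dx[d], ny = y + dy[d];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    int n = ny * w + nx;
                    if (walkable[n] && comp[n] == -1) {
                        comp[n] = next;
                        q.push(n);
                    }
                }
            }
            ++next;
        }
    }

    // Cheap pre-check: only run A* between a and b if this returns true.
    bool maybe_reachable(int a, int b) const {
        return comp[a] != -1 && comp[a] == comp[b];
    }
};
```

In DF's scheme only the region touched by a map change needs re-flooding; this toy relabels everything from scratch.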

It's fast to maintain, but the downside is that the component indices are maintained for walking only. This means that flying creatures, for instance, don't have global pathfinding intelligence that's any different from a walker. In combat and a few other situations, we use short-range flood fills with their actual logic to give them some advantages though. But it's not ideal for them.

I'm not sure we'll attempt other structures here to make it work any better. For our map sizes, they've all failed, including some outside attempts. Of course, it might be possible with a really concerted effort, and I've seen other games that have managed, for instance, some rectangular overlays and so forth that seem promising, but I'm not sure how volatile or large their maps were.

The simplest idea would just be something like adding a new index for fliers, but that's a large memory and speed hit, since we'd need to maintain two indices at once, and one is bad enough. More specific overlays can track their pathing properties (and then you path through the overlays instead of the tiles), but they are hard and slow to maintain as the map changes. There are various other ideas floating around, like tracking stairs, or doing some limited path caching, and there are probably some gains to be made there. We are certainly at the edge of what we can currently support in terms of agents and map complexity, so something'll have to give if we want to get more out of it.

Q: On that note, you're simulating a lot of things all at once—how do you manage so many actors asynchronously (or do you)?

A: If we're talking about asynchronous as in multithreading, then no, we don't do any of that, aside from the graphical display itself. There's a lot of promise here, even with microthreading, which the community has helped me out with but which I haven't had time to dive into. I don't have any experience with it, and it's a bug-prone thing.

Q: Have you tried other projects/technologies alongside DF?

A: Sure! The side project folder that's migrated between computers for the last ten years or so has about 90 projects in it. Some of them lasted for days, some for multiple years. They are mostly other games, almost always in other genres, but there are also a few DF helper projects, like the myth generator prototype. Nothing close to seeing the light of day, but it's fun to play around.

Q: With your ~90 side projects, have you explored any other programming languages? If so, any favorites?

A: Ha ha, nope! I'm more of a noodler over on the design side, rather than with the tech. I'm sure some things would really speed up the realization of my designs though, so I should probably at least learn some scripting and play around with threading more. People have even been kind enough to supply some libraries and things to help out there, but it's just difficult to block side project time out for tech learning when my side project time is for relaxing.

Q: You have the most interesting release notes. What's your favorite bug and what caused it?

A: It's probably boring for me to say, but I just can't beat the drunken cat bug. There've been a few videos made about it by this point. That was the one where the cats were showing up dead all over the tavern floor, and it turned out they were ingesting spilled alcohol when they cleaned their paws. One number was off in the ingest-while-cleaning code, and it sent them through all the symptoms of alcohol poisoning (which we added when we spruced up venomous creatures).

If you want to try Dwarf Fortress for yourself, you can download it from their website.

Tags: dwarf fortress, solo developer, video games

All Comments: [-]

someperson(10000) 4 days ago [-]

What's surprising to me is that the Dwarf Fortress author uses Visual Studio Community, rather than the paid version of that IDE:

> 'In enterprise organizations (meaning those with >250 PCs or >$1 Million US Dollars in annual revenue), no use is permitted beyond the open source, academic research, and classroom learning environment scenarios described above.'

Surely there have been years where his revenue has exceeded $1 million.

ramshanker(10000) 4 days ago [-]

WinRar 40-day trial, it is.

chaostheory(10000) 5 days ago [-]

To me what was most surprising about Dwarf Fortress, given the complexity, is that Tad didn't use git or any other code repository until more recently.

AstralStorm(10000) 5 days ago [-]

Well, if you deeply understand the code and reasons behind it (or it's superbly documented esp. with tests), the tool does not bring much beyond being a convenient backup or checkpoint system.

And especially if you don't have to work with a team.

reader_mode(10000) 5 days ago [-]

I mean, using SCM became the norm in maybe the last 10-15 years? When this was started I don't think using SCM was as ubiquitous as it is today, not to mention on a solo project. If you've been hammering away since then, I can see how you might have missed it.

Octopodes(10000) 5 days ago [-]

Am I mistaken, or isn't his given name 'Tarn?'

AnIdiotOnTheNet(10000) 5 days ago [-]

Why would that be surprising? For a single programmer working alone Git is an incredibly complicated tool.

P.S.: A lot of you are confusing 'complicated' with 'difficult'.

dpcx(10000) 5 days ago [-]

Did he finally start using one? I know that was one of the things that blew most people away: that he had no change history for most of his code...

jandeboevrie(10000) 5 days ago [-]

DF is just like OpenTTD. Both are like chess: easy to start, fun to play casually, but taking years to master. Great games, complex if you want them to be, and a time sink if you don't keep an eye on it. I've had many hundreds of fun hours in both games.

SilverRed(10000) 5 days ago [-]

This seems to be the opposite of what people have described in the past, with the start being incredibly hard, as you basically need to follow a wiki page step by step to work it out. But after a bit you can find a method that pretty much makes the game unlosable, so you have to start implementing your own restrictions and artificial difficulties.

DizzyDoo(10000) 5 days ago [-]

I've played a fair amount of Dwarf Fortress and I'd never describe it as 'easy to start'? The learning curve is notorious and for most people involves watching a lot of YouTube tutorials and copying actions.

I'm hopeful that the upcoming Kitfox Games version makes it very accessible to lots more people.

ALittleLight(10000) 5 days ago [-]

Dwarf Fortress seems incredibly hard to start to me. The UI is chaotic and the 'graphics' verge on incoherent.

anthk(10000) 5 days ago [-]

Eh, Chess is much more complex than Slashem to me, and I never ascended in Nethack/Slashem/DCSS.

short_sells_poo(10000) 5 days ago [-]

I agree with the spirit of your post, but Dwarf Fortress and 'easy to start' do not fit in the same sentence in my opinion. I mean, the game is legendary for its arcane user interface and the vast number of things that can go wrong even for experts.

Dwarf Fortress is something like vim, where usually on the first interaction people don't even know how to start the game, let alone do anything in it.

Chess is easy to start; the rules basically fit on a post-it note. Dwarf Fortress is difficult to start, and even more difficult to master.

njharman(10000) 5 days ago [-]

I thought it was developed by the two brothers? I've seen talks and interviews by both brothers on DF.

like this one https://www.youtube.com/watch?v=ZMRsScwdPcE

and https://www.youtube.com/watch?v=HtKmLciKO30

psyc(10000) 5 days ago [-]

This is StackOverflow, and the interview is about code, so they probably mean developer in that sense. Tarn is the only programmer.

mtekman(10000) 4 days ago [-]

One of the brothers is the main developer and ideas guy; the other fills more of an admin/idea-control/art role.

reidjs(10000) 5 days ago [-]

I read everything about this game I can get my hands on. I don't fully understand why I find dwarf fortress so intriguing. It's such a pure passion project... that actually made it.

dexwiz(10000) 5 days ago [-]

It's the programming equivalent of the people who turn their houses into model train worlds. People dabble in it, or make a few toys of their own, but it's rare to commit so hard.

totetsu(10000) 4 days ago [-]

I used to love falling asleep to the DFTalk podcast..

petercooper(10000) 5 days ago [-]

Do you actually play it? I'm a bit the same about reading about it, yet I have never once played it for myself!

milgrim(10000) 5 days ago [-]

I am the same. I also started to play a few times, but not knowing the mechanics and not having enough time/motivation to learn them in detail is frustrating. But there's a nice alternative: https://youtube.com/c/kruggsmash

Watching someone else play Dwarf Fortress can be surprisingly entertaining. Just start one of his series from the start.

setr(10000) 4 days ago [-]

Games are interactive simulations, by nature.

DF is an honest attempt at simulating things thoroughly.

Therefore, DF is an honest attempt at making a thorough game.

Very few games can make such a claim

vtail(10000) 5 days ago [-]

I know I will regret asking this... but what's the modern way to start playing DF?

andrewzah(10000) 5 days ago [-]

You can use dfhack to run the game, which provides some niceties [0]. There are graphical packs; many people like Phoebus. You can also use an external program like Dwarf Therapist for dwarf management, which becomes necessary once you have a lot of dwarves.

[0]: https://docs.dfhack.org/en/stable/docs/Introduction.html

SilverRed(10000) 5 days ago [-]

Consider playing RimWorld. It's the same idea with less depth but a whole lot more approachable.

pradn(10000) 5 days ago [-]

There's an officially supported skin with good sprite-based graphics. It's still a bit of a work in progress, however. https://www.kitfoxgames.com/press/sheet.php?p=dwarf_fortress

legohead(10000) 5 days ago [-]

Failing is considered part of the fun of the game [1], so just download it and start going.

[1] https://dwarffortresswiki.org/index.php/DF2014:Losing

AQuantized(10000) 5 days ago [-]

The Wiki is probably the most up to date resource: https://dwarffortresswiki.org/index.php/DF2014:Quickstart_gu...

It's really not that difficult once you get started, especially if you're used to learning some esoteric keybinds.

sjfkejrnakcijdj(10000) 5 days ago [-]

Not exactly modern, but the Captain Duck Dwarf Fortress Video Tutorial[1] is still a great tutorial.

[1]: https://youtube.com/playlist?list=PL5A3D7682BDD48FC2

Inviz(10000) 5 days ago [-]

Wait for steam version

mabbo(10000) 5 days ago [-]

> What's your favorite bug and what caused it?

> A: It's probably boring for me to say, but I just can't beat the drunken cat bug... That was the one where the cats were showing up dead all over the tavern floor, and it turned out they were ingesting spilled alcohol when they cleaned their paws.

I think that bug explains very well just how deeply complex Dwarf Fortress really is. Drinks can be spilled. Some drinks have alcohol. If cats step in something it sticks to their paws. Cats clean their paws, causing them to ingest what's on them. Enough alcohol will kill a cat. Put together: dead drunken cats.
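The chain of rules reads almost like pseudocode. As a toy model (invented numbers, nothing resembling DF's actual code), the interaction of independently reasonable systems looks like:

```cpp
// Toy model of the interacting rules described above. Each rule is
// sensible on its own; chained together, and with one scaling factor
// off, they add up to dead cats.
struct Cat  { double bloodAlcohol = 0.0; bool alive = true; };
struct Tile { double spilledAlcohol = 0.0; };

// Rule 1: contaminants on the floor stick to paws.
void step_on(const Tile &t, double &onPaws) {
    onPaws += t.spilledAlcohol * 0.1;
}

// Rule 2: cats clean their paws, ingesting whatever is on them.
// Rule 3: enough ingested alcohol triggers alcohol poisoning.
void clean_paws(Cat &cat, double &onPaws) {
    cat.bloodAlcohol += onPaws;   // the "one number off" lived in a
    onPaws = 0.0;                 // step like this, making the dose lethal
    if (cat.bloodAlcohol > 1.0)
        cat.alive = false;
}
```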

devenvdev(10000) 5 days ago [-]

Other amusing DF bugs:

Dwarfs trying to clean their inner organs (dwarf wounded, doctor closes the wound, dirt stays inside)

Undying children in the moat water (for years... just swimming there...)

Killer carps (there was a long time during which carps were really overpowered because constant swimming was buffing them up really good, dwarfs getting close to water sources were eaten by carps)

Catplosions (Tarn loves cats, cats reproduce, too many cats kill DF performance)

AndyMcConachie(10000) 5 days ago [-]

One of my favorite Twitter users.


It's just funny Dwarf Fortress bugs.

AnIdiotOnTheNet(10000) 5 days ago [-]

Unfortunately it remains unexplained (by the article) why this is considered a bug. It would be unethical to test, but this seems like perfectly cromulent behavior one might actually see in real life.

markus_zhang(10000) 5 days ago [-]

Not sure what's in the code, but I think each cat (which is some sort of entity by itself) is composed of body parts, which at the lowest level own a bunch of attributes/components. Would that make sense?

pavel_lishin(10000) 5 days ago [-]

Reminds me a little bit of something strange I saw in Rimworld - all of my dogs were developing liver cirrhosis!

It turns out that my dogs weren't alcoholics - it just happened to be that beer was the only food source they had zoned access to, so they were drinking it out of hungry desperation, and while it gave them enough calories to live on, it also gave them cirrhosis.

Psyonic(10000) 5 days ago [-]

I don't know much about Dwarf Fortress, but why is that a bug? It sounds like it follows reasonably well from the behavior in the game.

7373737373(10000) 5 days ago [-]

I wish there was a description of how this works technically

VectorLock(10000) 4 days ago [-]

Space Station 13 is another game with a lot of serendipitous emergent gameplay but with a slightly less steep (but still insane) learning curve and multiplayer antagonist fun.

Abishek_Muthian(10000) 4 days ago [-]

>Enough alcohol will kill a cat

That's what intrigues me. I haven't played the game, but this must mean there was a list of things that could get the cat killed, right? Then how come this was an unexpected bug?

ygra(10000) 5 days ago [-]

Noita also had a fun one during early development: deer drowning in their own urine.

And now that I dug up the Reddit AmA thread (https://old.reddit.com/r/Games/comments/d7cqjz/we_are_nolla_...), there's a comment there about the drunken dead cats ...

partomniscient(10000) 5 days ago [-]

10 years ago, there was this article about another unexpected behaviour:


quickthrower2(10000) 4 days ago [-]

Sounds like it's not a bug but working as intended

ericschn(10000) 5 days ago [-]

The beginning of this video https://youtu.be/VAhHkJQ3KgY has Tarn Adams speaking about this bug.

orf(10000) 5 days ago [-]

I tried to play DF. I even got over the ASCII interface.

But honestly Rimworld is better and a bit deeper.

jtms(10000) 5 days ago [-]

Rimworld is phenomenal... I absolutely love it, but I wish it had z-levels and the fluid sim of DF. Nothing is more fun than accidentally flooding your fortress with magma!

dkbrk(10000) 4 days ago [-]

I really can't agree. Rimworld might be more accessible, but it lacks DF's depth of simulation. Even if you ignore details in DF that are generally unimportant (e.g. all the intricate details of a dwarf's personality and the precise genetics of their facial hair), Rimworld's lack of 3D and fluid simulation alone make it vastly simpler.

But what really annoyed me about Rimworld, and why I never really got into it, was how 'gamey' it feels.

One element is the mechanics of random events, which feel extremely arbitrary. I'm not talking about random animals coming onto the map, that's little different from a legendary beast turning up in DF, but things like a solar event causing all your batteries to blow up for no reason. And random events seem to happen every few minutes. Whereas in DF, pretty much everything that happens, happens for a reason. There's randomness, but it's not completely random. If your fortress is wealthy, it will attract invasions. Forgotten beasts actually exist on the world map and have a history. That sort of thing.

In addition to that, crafting and combat are extremely simplistic compared to DF. In DF, bootstrapping an economy which can actually craft everything you need from scratch is actually fairly difficult and requires significant planning and investment. Many things require multiple stages of processing, and it can be very hard to find the particular raw material you need. In Rimworld, you pick up a few raw materials and can manufacture an assault rifle in a couple of minutes. And that assault rifle doesn't need ammunition, and is hopelessly inaccurate beyond 10m or so, for some reason. Compare that to DF where when a dwarf shoots a crossbow it actually does a 3D simulation of the bolt's trajectory. And in the background it's also simulating things like the bolt's temperature, and when it hits the target, it determines what happens based on the density and strength of the bolt's material and the material of whatever it hit.

rcxdude(10000) 5 days ago [-]

Rimworld is a better game but not really the deeper sim (I feel like it worked on distilling a lot of what made dwarf fortress compelling while making it much more accessible, and part of that is simplifying).

100011_100001(10000) 5 days ago [-]

This is a little bit of a hijack but if I wanted to start coding games as a side project, where would I start?

Which platform? Mobile, PC, console? Any good introductions on the subject of solo game development? I know I can google this, but I trust HN users more than the Google algo.

stevenhuang(10000) 4 days ago [-]

Love2D is an awesome opensource 2D Lua gamedev engine. Super fast for prototyping and great dev UX. I think it also supports mobile now as well.

darkandbrooding(10000) 5 days ago [-]

I recommend Godot ( https://godotengine.org/ ). It addresses many of the questions you raised.

AnIdiotOnTheNet(10000) 5 days ago [-]

I started doing game dev in DOS after reading Tricks of the Game Programming Gurus by Andre LaMothe in the 90s, so keep that in mind for the following advice.

Depends on what you want to do. If you want to code games because you find the programming aspect interesting, start small by writing your own versions of some simple games. I personally wouldn't recommend any frameworks or libraries other than (maybe) SDL. Implement everything in the simplest way you can think of that will actually work and only go back and refactor if you need to, that's the time to look up how other people have solved that problem [0]. Resist the urge to over engineer. I might be biased but I say target PC first. Windows specifically, but Linux isn't much worse as long as you never plan to deploy the thing. This is because these platforms are incredibly open and there is a lot of information and tooling available. After you get a feel for it, and have a good idea of what you want to make next, start incrementally branching out in directions that interest you.

If you have a good idea of the kind of game you want to make and want to start making it with as little friction as possible, then your best bet is to find an engine that is already well suited to that kind of game and learn just the things you need to in order to make it happen. Again, you'll want to start small regardless of what it is you actually want to make, just ensure that you're always moving toward that goal. That is very much not my path, so I have little other advice.

[0] If you look it up first without trying it yourself, you won't have a good understanding of the problem space. You'll end up believing in the commonly accepted answer as dogma and severely limit yourself.

ljp_206(10000) 5 days ago [-]

I'm not much of a game programmer, but lurk /r/gamedev. The common advice is to start small - think toys rather than MMOs. From there, it's said one should follow what they're interested in, and focus on follow-thru, not tacking on features to their dream game that is supposed to compete with Skyrim. Console development is always going to be more trouble than the more open platforms of computers and phones. Myself, I've always thought it'd be pretty fun to make a menu-heavy game with just web technology, which then of course CAN be played anywhere.

There are lots of engines out there that can take care of things for you, or act as full fledged studios, like Game Maker. Some prefer to start from scratch of course. Again, the idea should be follow what you're interested in so that you can actually get something done.

dharmab(10000) 4 days ago [-]

The very first game you should make is a Pong/Breakout/Asteroids/Galaga/Frogger clone. It seems simple, but you need to have a surprising amount of systems: graphics, audio, controls, collisions, game states, menus, scoring, user interface.

The engine and language does not matter for this first game. The only thing that matters is completing one small game.

After that, you'll have the level of knowledge to make somewhat informed choices about project #2, where you can expand and innovate. You can use hobbyist engines like Godot and have more control, or more professional engines like Unity or Unreal if you want access to those tools and asset libraries.

/r/gamedev has a useful article: https://www.reddit.com/r/gamedev/wiki/faq#wiki_getting_start...

bttrfl(10000) 5 days ago [-]

Why don't you start with an idea and self-awareness of your strengths?

You might be good at puzzles. Or stories. Maybe visuals ain't your thing and you can write a text-based game - there are great engines for that. Maybe you're a great dev and can start hacking on your own Dwarf Fortress and keep at it for the rest of your life.

Gaming and the tech behind it are so varied that whoever you are, you'll find something that plays to your skills.

dgan(10000) 5 days ago [-]

I started contributing to an old open-source strategy game I played as a kid, a couple of months ago.

It was extremely pleasant to fix some old bugs that kept annoying people (and me!)

I spend more time in the game's guts than playing, though.

I believe it's easier to start with an existing game than to create a new one from scratch.

Octopodes(10000) 5 days ago [-]

Check out this itch page for some suggestions:


Scroll down to the 'Help—I've never created a game before!' section and there are some suggestions like Phaser or Godot.

outworlder(10000) 5 days ago [-]

I would suggest you start by just coding a text-based game. Yeah, you heard me right. Maybe a text adventure or whatever. A D&D game printing out damage as logs. FTL clone, with text descriptions only. Anything. You can create a surprisingly engaging game just with basic standard input and output - even multiplayer games (the MUDs were basically this). Print a basic ASCII map, and now you can do Nethack.

The reason for this is: we all want to make beautiful AAA games. But if you have no clue where to begin, it means that you need to develop your intuition for the game logic first – otherwise, you would probably know what to look for :)

If you start by downloading Unity or similar, now you'll be bogged down trying to learn all its systems (without a clear understanding of _what_ you need to learn and what you can ignore, for now). You'll also be bogged down by the need for assets. Sure, you will have a full blown 3D engine, but it's still incredibly boring when all you have is a bunch of cubes or premade assets, so you are right back to square one. Only with more complexity. A lot more - the more visually complex the game, the more code you will have to write that's only concerned about visuals. Getting a character to, say, swing an axe and make it look right and that it is actually hitting something involves a surprising amount of work. Yeah, you could use existing sample games nowadays, but that isn't really teaching you much.

Then it depends on how much background you have. If you were, say, a front-end developer with any experience, you could use that to add some basic visuals. Think Tetris. You can do a lot with very rudimentary tools, as long as everything is kept simple.

At some point, you might _need_ to display graphics (maybe that's the whole point of your game idea). I say 'might', because Dwarf Fortress, which is the subject of this thread, never really did. In which case you have some more decisions to make. Is it a 2D game? Maybe use something like Pygame, LÖVE (for Lua), etc.

At some point you'd be looking into Godot, Unity or similar. And guess what, you could take your basic text-based game, and re-use parts of it as the brains for your game.

Don't get into the game-engine building rabbit hole. It's very fun if you are into that, but know you are unlikely to release anything by going that route. Ask me how I know.

Platform: start with whatever machine you use for development. Presumably you know a lot about it; don't start a side-quest :) In particular, avoid consoles for now. Cross-platform development is getting easier than ever - sometimes all you need to do to start running your game on mobile is to click a dropdown. But that simplicity is deceiving; there's a lot you'll have to learn about other platforms. Stick with what you know until you are comfortable.

I'll let others provide reference material, my sources are outdated as I'm past my Gamedev.net days (for the time being).

Audiophilip(10000) 5 days ago [-]

I'd go with Unity, targeting desktop for simplicity. There's an abundance of high quality tutorials (a respectable amount made by Unity) and the learning curve is gentle, imho.

markus_zhang(10000) 5 days ago [-]

I think the first thing to figure out is: do you want to make games or game engines? Or is there a specific part of game development that you're particularly interested in? Back in 2000 I was very into level design. The FPS genre had just taken off, and I happened to stumble onto 'Worldcraft.exe' in a folder on the Half-Life CD.

iwintermute(10000) 5 days ago [-]

Have you seen Handmade Hero? https://handmadehero.org/

Plus what do you mean by coding games specifically? Is it game engine programming? Game design? Or other related stuff?

mhitza(10000) 5 days ago [-]

I second Godot. You can make cross platform/mobile games with it. The IDE itself is built with the engine which is rather cool.

Fair warning: the syntax is Python-inspired but not compatible, which will trip you up most of the time.

My second suggestion is to make your first project a top-down 2D game (instead of a classic side-scroller), using itch.io assets. To get a nice jump feel in a side-scrolling 2D game, you have to start with a state machine right away and fiddle a bunch; there's no hand-holding for that.

markus_zhang(10000) 5 days ago [-]

IMO that's one of the best ways a single programmer can spend their career. No weird requirements, no deadlines, no nothing, nada. Just one's passion and a product. Whether it is successful is irrelevant.

Kudos to Mr. Adams for the achievement and for making gaming history.

Going back to the interview, I found this line (and the logic attached) interesting:

>Making the item system polymorphic was ultimately a mistake, but that was a big one.

>When you declare a class that's a kind of item, it locks you into that structure much more tightly than if you just have member elements.

I guess when the game becomes moderately complex, ECS or something similar suddenly makes a lot of sense.

wly_cdgr(10000) 5 days ago [-]

Everything about it is wonderful (I really mean that) except the fact that he's sponging off his mother to do it. I can't help but feel that it's a drop of vinegar that spoils the whole cup of milk

Philip-J-Fry(10000) 5 days ago [-]

I have a passion for programming but I need something to drive me. I'm useless when I try and make stuff on my own. But if I've got someone telling me 'I need a system that does X', I just get highly motivated in delivering it. Like I need requirements.

akira2501(10000) 5 days ago [-]

> No weird requirements, no deadlines, no nothing, nada

Well.. until you accidentally create a broken release, that is. That's _real_ sweat.

dmitryminkovsky(10000) 5 days ago [-]

I was with you until:

> Whether it is successful is irrelevant.

At some point something needs to be successful or you can't keep working on it, right?

sdevonoes(10000) 5 days ago [-]

That's why programming is king. Software engineering on the other hand...

hprotagonist(10000) 5 days ago [-]

Making the item system polymorphic was ultimately a mistake, but that was a big one.

Q: Why was this was a mistake?

A: When you declare a class that's a kind of item, it locks you into that structure much more tightly than if you just have member elements. It's nice to be able to use virtual functions and that kind of thing, but the tradeoffs are just too much. I started using a "tool" item in the hierarchy, which started to get various functionality, and can now support anything from a stepladder to a beehive to a mortar (and pestle, separately, ha ha), and it just feels more flexible, and I wish every crafted item in the game were under that umbrella.

We do a lot of procedural generation, and if we wanted to, say, generate an item that acts partially like one thing and partially like another, it's just way harder to do that when you are locked down in a class hierarchy.
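Tarn's point can be illustrated with a toy sketch (Python here for brevity; Dwarf Fortress itself is C++, and the facet names below are invented, not his actual code): instead of one subclass per item kind, a single generic item carries optional member data, so procedural generation can mix behaviors freely.

```python
from dataclasses import dataclass, field

# One generic Item instead of Stepladder(Item), Beehive(Item), Mortar(Item)...
# Behavior is driven by optional member data ("facets"), not by the type.
@dataclass
class Item:
    name: str
    facets: dict = field(default_factory=dict)

    def has(self, facet: str) -> bool:
        return facet in self.facets

# Procedural generation can now produce an item that acts partially like one
# thing and partially like another -- awkward under a fixed class hierarchy.
mortar = Item("mortar", facets={"grinds": {"fineness": 2}})
hybrid = Item("beehive-ladder", facets={"climbable": {"height": 3},
                                        "houses_bees": {"capacity": 100}})
```

The class hierarchy version would need a `BeehiveLadder` class (or multiple inheritance) for the hybrid; here it is just a different dictionary.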

I have certainly committed all of these sins, too.

adamrezich(10000) 5 days ago [-]

this is a pretty good series of posts but I was rather surprised at where the series ended, with more (hypothetical) OOP abstraction instead of less. trying to fit game rules into a language's type system is a common thing for novice programmers to attempt, because it seems like the perfect logical application of these type system rules you just learned about when learning the language you're using. I kept expecting the article series to get around to reducing the system down to something like a Player class with a PlayerClass enum and member (and ditto for Weapon), then branching logic based on that, instead of trying to pack it into the type system.

tinus_hn(10000) 5 days ago [-]

In the LPMud LPC language objects could inherit from multiple other objects, so they could combine behaviors. I haven't really seen that in other languages.

truetraveller(10000) 5 days ago [-]

'Tool' is still a class, with perhaps very generic polymorphic methods (e.g. do_default_action()). The problem is not polymorphism per se, but rather having a deep class hierarchy, aka lasagna code.

My policy: OOP is like salt. Use a little and it's great. I only allow a single inheritance layer, and ideally no inheritance at all.

blacktriangle(10000) 5 days ago [-]

Hickey nails this one in his talks. When you make something a class, that's NOT an abstraction, that's a concretion. You make a Tool class, you haven't abstracted what a tool is, you've made a fixed decision about what it is. For games that want this level of complex interaction between components, entity component systems are the way to go.
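An entity component system of the sort mentioned here can be sketched in a few lines (a toy illustration, not any particular library's API): entities are bare ids, data lives in per-component stores, and systems iterate over whichever entities have the components they need.

```python
from itertools import count

# Minimal ECS: entities are ids, component data lives in per-type stores,
# and systems operate on entities holding the components they require.
class World:
    def __init__(self):
        self._ids = count()
        self.components = {}  # component name -> {entity id: data}

    def spawn(self, **comps):
        eid = next(self._ids)
        for name, data in comps.items():
            self.components.setdefault(name, {})[eid] = data
        return eid

    def query(self, *names):
        stores = [self.components.get(n, {}) for n in names]
        shared = set(stores[0]).intersection(*stores[1:])
        for eid in sorted(shared):
            yield (eid, *(s[eid] for s in stores))

def movement_system(world):
    # Only entities with both "pos" and "vel" are touched; everything else
    # is ignored without any type checks or class hierarchy.
    for _, pos, vel in world.query("pos", "vel"):
        pos[0] += vel[0]
        pos[1] += vel[1]

world = World()
dwarf = world.spawn(pos=[0, 0], vel=[1, 0], mood="fey")
rock = world.spawn(pos=[5, 5])  # no "vel" component: movement skips it
movement_system(world)
```

Adding a new kind of interaction means adding a component store and a system, not reworking a tree of classes.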

uDontKnowMe(10000) 4 days ago [-]

Thanks for that great series! It reminded me a lot of this demo for O'Doyle Rules, a Clojure rules engine library, wherein the author demos a dungeon crawler style video game written top to bottom using only the rules engine for logic https://youtu.be/XONRaJJAhpA?t=1577

He also goes on to show a text editor written entirely in the rules engine (which he uses to develop the game in), really cool stuff!

mushishi(10000) 5 days ago [-]

> We no longer have the problem of trying to fit "a wizard can only use a staff or dagger" into the type system of the C# language. We have no reason to believe that the C# type system was designed to have sufficient generality to encode the rules of Dungeons & Dragons, so why are we even trying?

Good point! I skimmed the blog posts; they seem to have a useful enumeration of techniques, with considerations.

NoOneNew(10000) 5 days ago [-]

Sorry, but a lot of the complaints from the article and what you linked are... well, weird. Look, I'm no grand ninja guru wizard programmer, but after a decade of programming on and off as a job... wtf are you all smoking? There's nothing to preach but to check your hubris. A majority of problems stem not from OOP or whatever language is being used, but from over-abstracting. This is mostly due to trying to pre-build for a scale that will 99.8% never happen, or to account for some wild potential esoteric function in the ether that'll never happen either. There's some weird dick measuring contest out there on the internet that I wasn't invited to, where everyone is trying to out-over-complicate each other. They never stopped to properly learn any real design patterns, so their classes end up all over the place. 'It's OOP's fault!' And hell, sometimes you used a hammer when a screwdriver was more appropriate. No big deal, we all make mistakes in lines of design logic. It ain't OOP's fault you made an oops.

TickleSteve(10000) 5 days ago [-]

Composition over inheritance...

It's the fragility of large inheritance hierarchies. They work well for very rigidly defined structures, but not so well in most real-world usage.

dkbrk(10000) 4 days ago [-]

The Baader-Meinhof phenomenon has hit me hard. I read an excellent blog post yesterday on this exact subject. Well worth a read, even if Rust isn't your thing:


tralarpa(10000) 5 days ago [-]

That's very interesting, because I observed exactly the same thing when I tried to implement a roguelike in Java some years ago. For example, I had to decide whether there should be different subclasses for spell books, the different weapon types (e.g. bows vs. swords), drinks, etc., or just one big Item class. Closely related to that, another decision I had to make was whether object or character properties should be implemented as class members or as entries in a hashmap (where the latter is a class member). In the end, I had the feeling that I was implementing my own class/object system on top of Java's. I guess in other languages, like Lisp, this is not really an issue.

IIRC I already wrote that here on HN, but I think implementing a simple roguelike is an excellent exercise to get familiar with any programming language.

vishnugupta(10000) 5 days ago [-]

Prefer delegation over inheritance was one of the first OOP lessons I learned. I read it in a C++ blog around 2004; it was explained very well with geometric-shape examples.

The only time I used inheritance was while implementing a job execution framework. It fit the pattern nicely.

ambivalence(10000) 5 days ago [-]

Meta-question to the mods: why would you edit the original title of this entry? Feels somewhat interventionist.

pcthrowaway(10000) 5 days ago [-]

I agree, I thought the full title was better and more relevant.

_peeley(10000) 5 days ago [-]

Dwarf Fortress consumed hundreds of hours of my life in high school, I have so many fond memories of it. Every year or so I come back to it and I'm always surprised that they've managed to add another mechanic or feature that just makes the game feel even more like its own little universe. After enough time in the game there really is a moment like that scene in the Matrix - 'I don't even see the ASCII anymore. All I see is dwarf, plump helmet, magnetite ore.'

That said, I've always wondered if Dwarf Fortress would be a smoother experience if it had more developers or were just open source (understandable that it's not, though, since it's basically Tarn's passion project). The biggest headache was always the lack of multithreading, since your fortress really starts to chug once you pass maybe 150 dwarves or do anything exciting with fluids. Regardless, it's amazing what one developer with a great idea and an enthusiastic community has been able to do with the game.

anthk(10000) 5 days ago [-]

It happened to me with Nethack/Slashem and interactive fiction games.

Once you get absorbed at night, imagining your surroundings as you read the game's actions, the game feels scarier and more 'real' than any current 3D adventure game.

asciimov(10000) 5 days ago [-]

> That said, I've always wondered if Dwarf Fortress would be a more smooth experience if it had more developers

Maybe, but only if it was under the benevolent dictator model.

devenvdev(10000) 5 days ago [-]

> I don't even see the ASCII anymore. All I see is dwarf, plump helmet, magnetite ore

It's even more than that - I can spend a ridiculous amount of time in legends mode, just reading facts and events, going from one personality to another, trying to 'feel' the world. It's like a bigger picture emerges from these pieces of trivial information, consisting partially of the facts and partially of random connections my brain made. It's an amazing experience.

fridif(10000) 5 days ago [-]

Passion project? It is the sole source of his income.

andrewzah(10000) 5 days ago [-]

Even if it were open source I doubt there would be enough impetus to implement multithreading.

It would literally be easier to completely make a new game from scratch with async and threading designs taken into account, instead of trying to adapt an existing monolith.

Async and multithreading are complex and introduce many subtle bugs. It's not so easy to just move to that from a single-thread event loop.

the_af(10000) 5 days ago [-]

I love the idea of Dwarf Fortress and I think the internet's purest mission is to disseminate works of passion such as this, not to sell me ads instead. That said, I can't get past the ASCII interface -- I'm a huge fan of IF games (which used to be called 'text adventures' in the olden days) and I can deal with spartan UIs, but for real-time strategy/sandbox games, I absolutely need some sort of graphics. Tiles, at least. The same happens to me with Nethack, which fortunately does have graphical tilesets. I'm glad to read Toady One is working on such a UI!

Something I found insightful about TFA was this:

> Q: With your ~90 side projects, have you explored any other programming languages? If so, any favorites?

> A: Ha ha, nope! I'm more of a noodler over on the design side, rather than with the tech. I'm sure some things would really speed up the realization of my designs though, so I should probably at least learn some scripting and play around with threading more. People have even been kind enough to supply some libraries and things to help out there, but it's just difficult to block side project time out for tech learning when my side project time is for relaxing.

This is interesting. I constantly feel the temptation to learn new tools, new languages, new stuff. I get sidetracked by the tech. But the key to successful games seems to be designing them and sticking to the work of making them work no matter the tech or language. If Toady had kept playing with programming languages and frameworks instead of sticking to his actual project -- creating a game -- maybe Dwarf Fortress wouldn't exist, or it wouldn't be as featureful.

JonathanFly(10000) 5 days ago [-]

>I love the idea of Dwarf Fortress and I think the internet's purest mission is to disseminate works of passion such as this, not to sell me ads instead. That said, I can't get past the ASCII interface -- I'm a huge fan of IF games (which used to be called 'text adventures' in the olden days) and I can deal with spartan UIs, but for real-time strategy/sandbox games, I absolutely need some sort of graphics.

You can see the new tileset in action here, demoed by Tarn: https://www.youtube.com/watch?v=LlzCrJS1Fho

teataster(10000) 5 days ago [-]

Have you tried playing with tilesets? I feel they make the experience easier on the eyes.

Bayart(10000) 5 days ago [-]

There are tons of tilesets for DF. If you want to get started, just picking one of the 'Lazy noob packs' is the way to go[1].

[1]: https://dwarffortresswiki.org/index.php/Utility:Lazy_Newb_Pa...

flippinburgers(10000) 4 days ago [-]

It is on steam and it will have a new UI with tiling.

ffffwe3rq352y3(10000) 4 days ago [-]

If you like the idea of DF but can't get around the interface, try RimWorld! Easily one of my favorite games of all time! It's like DF but with a Firefly theme and actual graphics.

Historical Discussions: Who Owns My Name? (July 30, 2021: 900 points)

(901) Who Owns My Name?

901 points 4 days ago by Tomte in 10000th position

amandamarieknox.medium.com | Estimated reading time – 10 minutes | comments | anchor

Does my name belong to me? Does my face? What about my life? My story? Why is my name used to refer to events I had no hand in? I return to these questions because others continue to profit off my name, face, and story without my consent. Most recently, the film Stillwater.

This new film by director Tom McCarthy, starring Matt Damon, is "loosely based" or "directly inspired by" the "Amanda Knox saga," as Vanity Fair put it in a for-profit article promoting a for-profit film, neither of which I am affiliated with. I want to pause right here on that phrase: "the Amanda Knox saga." What does that refer to? Does it refer to anything I did? No. It refers to the events that resulted from the murder of Meredith Kercher by a burglar named Rudy Guede. It refers to the shoddy police work, prosecutorial tunnel vision, and refusal to admit their mistakes that led the Italian authorities to wrongfully convict me, twice.

In those four years of wrongful imprisonment and 8 years of trial, I had near-zero agency. Everyone else in that "saga" had more influence over the course of events than I did. The erroneous focus on me by the Italian authorities led to an erroneous focus on me by the press, which shaped how I was presented to the world. In prison, I had no control over my public image, no voice in my story.

This focus on me led many to complain that Meredith had been forgotten. But of course, who did they blame for that? Not the Italian authorities. Not the press. Me! Somehow it was my fault that the police and media focused on me at Meredith's expense. The result of this is that 15 years later, my name is the name associated with this tragic series of events, of which I had zero impact on. Meredith's name is often left out, as is Rudy Guede's. When he was released from prison recently, this was the NY Post headline.

In the wake of #metoo, more people are coming to understand how power dynamics shape a story. Who had the power in the relationship between Bill Clinton and Monica Lewinsky? The president or the intern? It matters what you call a thing. Calling that event the "Lewinsky Scandal" fails to acknowledge the vast power differential, and I'm glad that more people are now referring to it as "the Clinton Affair" which names it after the person with the most agency in that series of events.

I would love nothing more than for people to refer to the events in Perugia as "The murder of Meredith Kercher by Rudy Guede," which would place me as the peripheral figure I should have been, the innocent roommate. But I know that my wrongful conviction, and subsequent trials, became the story that people obsessed over. I know they're going to call it the "Amanda Knox saga" into the future. That being the case, I have a few small requests:

Don't blame me for the fact that others put the focus on me instead of Meredith. And when you refer to these events, understand that how you talk about it affects the people involved: Meredith's family, my family, Raffaele Sollecito, and me.

Don't do what Pete Hammond did when reviewing Stillwater for Deadline, referring to me as a convicted murderer while conveniently leaving out my acquittal. I asked him to correct it. No response.

And if you must refer to the "Amanda Knox saga," maybe don't call it, as The New York Times did in profiling Matt Damon, "the sordid Amanda Knox saga." Sordid: morally vile. Not a great adjective to have placed next to your name. Repeat something often enough, and people believe it.

Now, Stillwater is by no means the first thing to rip off my story without my consent at the expense of my reputation. There was of course the terrible Lifetime movie that I sued them over, resulting in them cutting a dream sequence where I was depicted as murdering Meredith.

A few years ago, there was the Fox series Proven Innocent, which was developed and marketed as "What if Amanda Knox became a lawyer?" The first I heard from the show's makers was when they had the audacity to ask me to help them promote it on the eve of its premiere.

Malcolm Gladwell's last book, Talking to Strangers, features a whole chapter analyzing my case. He reached out on the eve of publication to ask if he could use excerpts of my audiobook in his audiobook. He didn't think to ask for an interview before forming his conclusions about me. To his credit, Gladwell responded to my critiques over email, and was gracious enough to join me on my podcast, Labyrinths. I extend the same invitation to Tom McCarthy and Matt Damon, who I hope hear what I'm about to say about Stillwater.

Stillwater was "directly inspired by the Amanda Knox saga." Director Tom McCarthy tells Vanity Fair, "he couldn't help but imagine how it would feel to be in Knox's shoes." ...but that didn't inspire him to ask me how it felt to be in my shoes. He became interested in the family dynamics of the "Amanda Knox saga." "Who are the people that are visiting [her], and what are those relationships? Like, what's the story around the story?" My family and I have a lot to say about that, and would have told McCarthy...if he'd ever reached out.

"We decided, 'Hey, let's leave the Amanda Knox case behind,'" McCarthy tells Vanity Fair. "But let me take this piece of the story — an American woman studying abroad involved in some kind of sensational crime and she ends up in jail — and fictionalize everything around it." Let me stop you right there. That story, my story, is not about an American woman studying abroad "involved in some kind of sensational crime." It's about an American woman NOT involved in a sensational crime, and yet wrongfully convicted.

And if you're going to "leave the Amanda Knox case behind," and "fictionalize everything around it," maybe don't use my name to promote it. You're not leaving the Amanda Knox case behind very well if every single review mentions me. You're not leaving the Amanda Knox case behind when my face appears on profiles and articles about the film.

But, all this I mostly forgive. I get it. There's money to be made, and you have no obligation to approach me. What I'm more bothered by is how this film, "directly inspired by the Amanda Knox saga, "fictionalizes" me and this story.

I was accused of being involved in a death orgy, a sex-game gone wrong, when I was nothing but platonic friends with Meredith. But the fictionalized me in Stillwater does have a sexual relationship with her murdered roommate.

In the film, the character based on me gives a tip to her father to help find the man who really killed her friend. Matt Damon tracks him down. This fictionalizing erases the corruption and ineptitude of the authorities.

What's crazier is that, in reality, the authorities already had the killer in custody. He was convicted before my trial even began. They didn't need to find him. And even so, they pressed on in persecuting me, because they didn't want to admit they had been wrong.

McCarthy told Vanity Fair that "Stillwater's ending was inspired not by the outcome of Knox's case, but by the demands of the script he and his collaborators had created." Cool, so I wonder, is the character based on me actually innocent?

Turns out, she asked the killer to help her get rid of her roommate. She didn't mean for him to kill her, but her request indirectly led to the murder. How do you think that impacts my reputation?

I continue to be accused of "knowing something I'm not revealing," of "having been involved somehow, even if I didn't plunge the knife." So Tom McCarthy's fictionalized version of me is just the tabloid conspiracy guilter version of me.

By fictionalizing away my innocence, my total lack of involvement, by erasing the role of the authorities in my wrongful conviction, McCarthy reinforces an image of me as a guilty and untrustworthy person. And with Matt Damon's star power, both are sure to profit handsomely off of this fictionalization of "the Amanda Knox saga" that is sure to leave plenty of viewers wondering, "Maybe the real-life Amanda was involved somehow."

Which brings me to my screenplay idea! It's directly inspired by the life of Matt Damon. He's an actor, celebrity, etc. Except I'm going to fictionalize everything around it, and the Damon-like character in my film is involved in a murder. He didn't plunge the knife per se, but he's definitely at fault somehow. His name is Damien Matthews, and he starred in the Jackson Burne spy films. He works with Tim McClatchy, who's a Harvey Weinstein type. It's loosely based on reality. Shouldn't bother Matt or Tom, right?

I joke, but of course, I understand that Tom McCarthy and Matt Damon have no moral obligation to consult me when profiting by telling a story that distorts my reputation in negative ways. And I reiterate my offer to interview them on Labyrinths. I bet we could have a fascinating conversation about identity, and public perception, and who should get to exploit a name, face, and story that has entered the public imagination.

I never asked to become a public person. The Italian authorities and global media made that choice for me. And when I was acquitted and freed, the media and the public wouldn't allow me to become a private citizen again. I went back to school and fellow students photographed me surreptitiously, people who lived in my apartment building invented stories for the tabloids, I worked a minimum wage job at a used bookstore, only to be confronted by stalkers at the counter. I was hounded by paparazzi, my story and trauma was (and is) endlessly recycled for entertainment, and in the process, I've been accused of shifting attention away from the memory of Meredith Kercher, of being a media whore.

I have not been allowed to return to the relative anonymity I had before Perugia. My only option is to sit idly by while others continue to distort my character, or fight to restore my good reputation that was wrongfully destroyed.

It's an uphill battle. I probably won't succeed. But I've been here before. I know what it's like facing impossible odds.

All Comments:

seriousquestion(10000) 4 days ago [-]

This is one of those cases where the news cycle and court of public opinion spiraled out of control, never to be corrected. Imagine how surreal and frustrating that must be. And she makes a good point about how these things get named:

Who had the power in the relationship between Bill Clinton and Monica Lewinsky? The president or the intern? It matters what you call a thing. Calling that event the "Lewinsky Scandal" fails to acknowledge the vast power differential, and I'm glad that more people are now referring to it as "the Clinton Affair" which names it after the person with the most agency in that series of events.

prepend(10000) 4 days ago [-]

I think saying "the Clinton Affair" is not specific enough, so the press calling it the "Lewinsky Scandal" is more understandable and has nothing to do with agency.

There are multiple Clinton affairs and multiple scandals, so any headline using those terms wouldn't make sense. "Clinton/Lewinsky Scandal" would make more sense and be clear.

Alex3917(10000) 4 days ago [-]

It's named after the one who told other people about the relationship. That's about as fair as you can get.

iratewizard(10000) 4 days ago [-]

I think there are two big differences with the Lewinsky scandal: everyone knows who Clinton is, so 'the Clinton Affair' is not as precise; and Monica Lewinsky was not an innocent victim.

golemotron(10000) 4 days ago [-]

This falls into the same category as France asserting control over the use of the word 'Champagne'.

People use words. People talk about events and other people. It's part of being alive.

kube-system(10000) 4 days ago [-]

Not at all the same. Food labelling laws are a consumer protection.

helsinkiandrew(10000) 4 days ago [-]

The reason she's mentioned in US media now and then, rather than the victim and perpetrator, is that she's American and the public will be interested. In the U.K., newspapers talk about Meredith Kercher.

That doesn't make it right or excuse the dreadful treatment though. I can't see how using her name to promote the film isn't slanderous/libellous.

tutmeister(10000) 4 days ago [-]

> In the U.K. newspapers talk about Meredith Kercher.

Maybe sometimes, but Knox's name makes up the vast majority of headlines. Here are just three major UK publications, and see how little Kercher is mentioned in her own search results...

The Independent: https://www.independent.co.uk/topic/meredith-kercher

The Guardian: https://www.theguardian.com/world/meredithkercher

The Metro: https://metro.co.uk/tag/meredith-kercher/

That must be very hard for Amanda to deal with, her article was surprisingly calm despite the many reasons to be angry about this continued media hype.

lvs(10000) 4 days ago [-]

No, the reason was that she is very attractive. That sells advertising. It's as simple as that. Many of the vile things we complain about in media all come down to the business model.

pajko(10000) 4 days ago [-]

From the law's viewpoint, you don't own your face or your fingerprints. The police can force you to unlock your devices via face or fingerprint recognition, or can do it themselves by force against your will, but they can't force you to unlock via a passcode (which is in your head).

SavantIdiot(10000) 4 days ago [-]

Not quite, it's a little murkier:


(But since that Israeli company is selling iPhone hacking kits, locks probably don't matter anymore.)

compiler-guy(10000) 4 days ago [-]

Not that simple. From the law's viewpoint you own your car or house, but there are legal ways of compelling you to do certain things with it, or even surrendering it.

gpas(10000) 4 days ago [-]

I'm Italian and I remember very well the shitshow that followed the tragedy. When a case has no clear ending, and the press has already issued its weekly verdicts, no one comes out fully innocent, and in that moment justice has failed.

I'm surprised to read her name for the second time in two weeks, now even here on hn.

Just a week ago, out of the blue, even if people had rightfully forgotten her...


She has every right to tweet whatever she wants, but that's not the best way to go under the radar.

I'm convinced that, when there's the will, the world is big enough for anyone to disappear and get a new life.

aaron695(10000) 4 days ago [-]

It seems no one will state the truth: she's not neurotypical.

As such, her behaviour is complicated, which is how all this happened.

It's unfair that you're downvoted, because without saying this, for her to joke about the death of her flatmate on Twitter is as immoral as what she is accusing others of.

PestoDiRucola(10000) 3 days ago [-]

> but that's not the best way to go under the radar

Assuming she wants to. She seems to want to try and profit from this as much as she can.

silviot(10000) 4 days ago [-]

I'm confused: are you taking into account the existence of this movie? https://en.wikipedia.org/wiki/Stillwater_(film)

It's one of the points of the article. You really, really should take that into account before saying something like

> I'm convinced that, when there's the will, the world is big enough for anyone to disappear and get a new life.

and blame her tweets for the attention she gets.

teddyh(10000) 4 days ago [-]

The answer to the literal question is simple. Other people use your name, not you, so the name belongs to other people. Render unto Caesar, etc.

Your name refers to, not your identity (whatever that is), but the idea of you in the heads of other people.

pjc50(10000) 4 days ago [-]

What is Elton John's name and who owns it?

ectospheno(10000) 4 days ago [-]

I feel like your answer not only ignores the entire premise of the article but also throws fuel on the power differential fire discussed within.

olah_1(10000) 4 days ago [-]

This reminds me of identity in Secure-Scuttlebutt. It's my favorite naming system.

Basically you're just an unpronounceable identity. Other people give you names. Different people call you different things.

After all, when you're born, you're just given a name.

cblconfederate(10000) 4 days ago [-]

By that logic trademarks should not exist

OskarS(10000) 4 days ago [-]

It's a really powerful article, and it's hard to argue with any of it. What a nightmare it must be to have what happened to Amanda Knox happen to you. A totally innocent person, who was not only imprisoned for years for a crime she had nothing to do with, but also had her name dragged in the mud by the global press for years. To such an extent that most casual observers still think she had something to do with the crime.

It's clear that the filmmakers have no legal obligation to Knox (and she acknowledges as much in the article), but I think it is equally clear that they have a moral obligation to not slander her using a thinly veiled fictional character.

It's a shame too, because the real "Amanda Knox saga" would make for a much more interesting movie: what is it like to have your roommate murdered, your life destroyed, and your identity robbed from you by the global tabloid press? That's the real Amanda Knox story.

kevinmchugh(10000) 4 days ago [-]

Jimmy Dell : I think you'll find that if what you've done for them is as valuable as you say it is, if they are indebted to you morally but not legally, my experience is they will give you nothing, and they will begin to act cruelly toward you.

Joe Ross : Why?

Jimmy Dell : To suppress their guilt.

- The Spanish Prisoner

Cederfjard(10000) 4 days ago [-]

> It's a shame too, because the real "Amanda Knox saga" would make for a much more interesting movie: what is it like to have your roommate murdered, your life destroyed, and your identity robbed from you by the global tabloid press? That's the real Amanda Knox story.

What are the themes of this new movie, then, if not those?

Taylor_OD(10000) 4 days ago [-]

There is a quite interesting Amanda Knox documentary on Netflix that she was very much involved in making. If you want the 'saga', it's worth a watch.

contravariant(10000) 4 days ago [-]

> the real "Amanda Knox saga" would make for a much more interesting movie: what is it like to have your roommate murdered, your life destroyed, and your identity robbed from you by the global tabloid press? That's the real Amanda Knox story.

I might be confused but isn't that the exact plot of the movie she is referencing?

unyttigfjelltol(10000) 4 days ago [-]

>It's clear that the filmmakers have no legal obligation to Knox (and she acknowledges as much in the article)

I think you and she are being generous. Amanda 'jokingly' floats the idea of defaming and slandering Matt Damon under the guise of fictionalization. She makes an excellent point: Matt wouldn't put up with that. The only plausible distinction he can make is that his movie is not a gross distortion of the moral character of a living person, which seems like the sort of thing courts can and do sort out between litigants who cannot agree.

stefantalpalaru(10000) 3 days ago [-]

> the real "Amanda Knox saga" would make for a much more interesting movie

It would: http://themurderofmeredithkercher.com/The_Evidence

mmarq(10000) 4 days ago [-]

While Amanda Knox is not guilty of Meredith Kercher's murder, she is guilty of accusing a random guy of being the murderer. I think she was sentenced to 2-3 years for this false accusation.

stjohnswarts(10000) 4 days ago [-]

They really would have done her a service if they had just left her name out entirely and said 'the movie stands on its own merits'. They could have handled this so much better by talking to her from the start rather than near the completion/release of the movie, particularly in promoting it.

f38zf5vdt(10000) 4 days ago [-]

'Capitalism' and 'moral obligation' are probably mutually exclusive, e.g. Milton Friedman's 'The Social Responsibility of Business is to Increase its Profits'. [1] Friedman would argue that the movie maker has a social responsibility to its employees and shareholders, and that bending truth to meet these obligations would be its moral imperative.


edit: To downvoters, I'm not agreeing with the perspective, just saying that there is one.

duxup(10000) 4 days ago [-]

I had similar questions when it came to the film Sully. https://en.wikipedia.org/wiki/Sully_(film)

In that film they portray the NTSB investigators as trying to paint the pilot in a bad light during the investigation. According to the folks involved in the investigation, including the pilot, this never happened.

These are real people, they don't have the reach or voice of a movie, what happens when someone decides to portray them unfairly?

Legally I don't think there's anything to be done, there would be too many bizarre second order effects if you simply couldn't portray someone without their permission. At the same time it seems morally questionable to not involve them, specifically if their voice is so much smaller than the medium.

georgeecollins(10000) 4 days ago [-]

There is a Richard III society, dedicated to rehabilitating the king after his unfortunate portrayal by Shakespeare.

ghaff(10000) 4 days ago [-]

Pretty much every film based on real people and events takes liberties in service of narrative flow and drama. Sometimes one or more of the original parties are involved. Often they're not.

dylan604(10000) 4 days ago [-]

>this never happened.

This is just Hollywood trying to add drama for the sake of making the story more 'interesting'. Little thought is given to potential collateral damage to truth. All protected under the umbrella comment 'based on true events'.

kortilla(10000) 4 days ago [-]

At least that's just entertainment. Journalism frequently mischaracterizes people and events just to paint a narrative.

sharikous(10000) 4 days ago [-]

Oh please, she was an accused murderer. There was no sure proof she did it, nor that she was innocent, so she was cleared. Notably she lied in court several times.

She benefitted from the media attention to make money for herself, and she is now publishing an unsolicited essay in which she tries to make many moral 'who is the victim' points (of course claiming she is the victim).

You could argue she should have been granted anonymity but I cannot see her as a helpless victim.

dkersten(10000) 4 days ago [-]

> There was no sure proof she did it

Exactly. And yet she was treated as one by the media and justice system. She even spent time in prison for it. Even though there was no proof she did it.

> nor that she was innocent

There's no proof that you're innocent either, maybe you did it and should spend a few years in prison until later acquitted? Absence of proof of innocence does NOT imply guilt.

> of course claiming she is the victim

She was a victim. Not the same as the murder victim, but she got her reputation ruined, had her family go into debt trying to pay legal fees and SPENT YEARS IN PRISON. For something that there was no evidence she did, was evidence someone else did (who by the way got less time than she got, before she was acquitted), and for which she was eventually found innocent of and acquitted for.

Maybe develop some empathy.

danso(10000) 4 days ago [-]

> Oh please, she was an accused murderer. There was no sure proof she did it, nor that she was innocent, so she was cleared

From the Economist:


> The Court of Cassation in Rome found Ms Knox and Mr Sollecito not guilty on the grounds that they had "not committed the act". Italian law recognises different levels of acquittal; this is the most categorical.

robertlagrant(10000) 3 days ago [-]

She's not claiming she's the victim. She was a victim of a miscarriage of justice, and continues to have the story retold as though she was guilty. You're trying pretty hard to misinterpret this fairly straightforward situation.

Twixes(10000) 4 days ago [-]

She was accused of murder, but in the end explicitly cleared by the Supreme Court as innocent.

duxup(10000) 4 days ago [-]

>she is now publishing an essay she was not asked to do in which she tries to make many moral 'who is the victim' points (of course claiming she is the victim)

So? Is that a bad thing?

jeezzbo(10000) 4 days ago [-]

A recursive signatory deriving root ID Rights + data use under control of people directly fixes this.. a process that repeats is a requirement to prevent second-class process from deriving system outcomes no longer produced 'of, by, for' people, Individuals All, as root dependency of accurate governance in a civil Society. Recursive Signatory: https://www.moxytongue.com/2021/07/recursive-signatory.html

ekster(10000) 4 days ago [-]

This reads like some kind of Sovereign Citizen / Time Cube babble.

Wowfunhappy(10000) 4 days ago [-]

I wasn't familiar with this movie until now. How much does official marketing material mention Amanda Knox? From a quick look up, she doesn't appear to be in the official synopsis on e.g. Rotten Tomatoes.

Because, hypothetical thought experiment here:


Let's say Tom McCarthy had the idea for a screenplay while watching the Amanda Knox trial. He doesn't know, at that point, whether she is guilty or innocent, and it really doesn't matter—the case is just inspiration for a fictional story, which can play out however the author wishes.

So he makes that movie, and it's in production, and one day in an interview a reporter asks 'Where did you get the idea for this movie?' Maybe they even ask 'This story seems kind of similar to the Amanda Knox case, was that an inspiration?'

At this point, does Tom McCarthy need to lie, or decline to answer? Should he not be allowed to share his creative process with the world?

Or, perhaps he just never should have made the movie in the first place... but that seems wrong, doesn't it? Creatives get inspiration from all sorts of odd places, and I don't want to limit them!


Again, I have no idea if the real situation played out like this at all, and in fact, I'm just going to guess that it was far more egregious. But it's what I was thinking about when reading this piece.

aaron695(10000) 4 days ago [-]

Matt Damon kinda panics when asked (3:00 mark) - https://twitter.com/TODAYshow/status/1392101154650271749

So there is a lot in what you are saying.

They have not shut the idea down though, because they know it means a lot of money. Just like the people asking know it means money for their ratings.

Amanda Knox has made them a lot of money with this article which has blown it up more.

She has also gotten her blog out and now I'm listening to her podcast. It's her only way to profit on all this.

The movie maker is not in the wrong. The movie is about a violent dad, as far as I can see. It's the media who've been here the whole time.

dml2135(10000) 4 days ago [-]

> Or, perhaps he just never should have made the movie in the first place... but that seems wrong, doesn't it? Creatives get inspiration from all sorts of odd places, and I don't want to limit them!

I think the point of this article is no, that's not wrong. The story was clearly based on Amanda Knox, and making this movie perpetuates the harm that has been done to her. This isn't an exercise of creative freedom, it's an exploitative cash grab.

dkersten(10000) 4 days ago [-]

The article explains this, that Tom McCarthy mentioned her in a Vanity Fair promotional interview. Nobody said they put it on posters or anything.

kortilla(10000) 4 days ago [-]

> The New York Times did in profiling Matt Damon, "the sordid Amanda Knox saga." Sordid: morally vile. Not a great adjective to have placed next to your name. Repeat something often enough, and people believe it.

This is pretty stupid. When you read, "sordid Jewish genocide" do you think "sordid" is describing "Jewish"?

There is quite a bit of bullshit in the article about power imbalances that isn't coherent either. Both her and the murder victim had no power yet she is fine with putting the murder victim's name up for an alternate headline.

There may have been a point in there somewhere, but it got buried by a cheap attempt to ride the #metoo zeitgeist.

PavleMiha(10000) 4 days ago [-]

I don't think anyone has, or would ever, use the expression you created, but the article describes why this bothers her: 'Not a great adjective to have placed next to your name. Repeat something often enough, and people believe it.'

I for one wouldn't like something to be described as 'the sordid <my name> saga' if I was innocent.

rideontime(10000) 4 days ago [-]

> This is pretty stupid. When you read, "sordid Jewish genocide" do you think "sordid" is describing "Jewish"?

Of course not, it should read 'sordid Israeli genocide.' Or are you saying it should be 'sordid Palestinian genocide'?

kortilla(10000) 4 days ago [-]

Note that she is mad she is not making money from the movie, not that her name is associated with the event: https://twitter.com/amandaknox/status/1418628570453200897

mkl(10000) 4 days ago [-]

I see no indication of what you're claiming in that tweet or her replies to it.

dr_kiszonka(10000) 3 days ago [-]

Even if it were true, should the producers throw some money her way? It sounds like the right thing to do, doesn't it?

renewiltord(10000) 4 days ago [-]

Made me go look up the tale. I recall part of why the whole thing looked weird was that she said that the owner of the bar where she worked was there when the body was discovered.

That dude, Patrick Lumumba, lost his bar and eventually moved to Poland. He was unhappy about the whole thing since he was only her employer.

She got 3 years for slandering him but says she was pressured into it.

Bloody hell. That's a warning to not talk during an investigation. Looks like they were going to pin something on her.

himinlomax(10000) 4 days ago [-]

> That's a warning to not talk during an investigation

That's assuming the authorities respect your right to do so. It's not even that clear-cut a right in England, for example, where there are cases in which not talking to the police can be held against you. Also, while the right to an attorney and against self-incrimination is enshrined in the European Convention on Human Rights and enforced by the ECHR, it took dozens of that court's decisions for France to start implementing it in earnest. I don't know about the situation in Italy in that respect, but their justice system is usually a fucking mess, like their bridges.

beeboop(10000) 3 days ago [-]

I will never stop recommending this video. I've watched it a dozen times, at least: https://www.youtube.com/watch?v=d-7o9xYp7eE&t=1s

rootusrootus(10000) 4 days ago [-]

It's interesting to me how the truth got lost, and how uninterested people are in the aftermath. The real killer ended up with a sentence almost half the length of what Amanda Knox got. And he is already out of prison. Italy's justice system is very different from the US's, for better or worse.

soheil(10000) 4 days ago [-]

> Italy's justice system is very different from the US's, for better or worse.

You clearly mean worse based on your previous sentence. Mind shedding some light on those 'differences' for the uninitiated?

bonzini(10000) 4 days ago [-]

The sentence that Knox got was zero, how is it half of 16 years? You can't compare a definitive sentence with one that was overturned.

(Also the real killer got a 1/3rd automatic reduction by accepting summary judgment instead of a full trial).

throwamanda(10000) 4 days ago [-]

This instantly reminded me of OJ Simpson's trials and acquittal. What makes this so believable, at least here on HN? Because she's a young beautiful woman and not a scary black man? I hope people here, in one of the more rational communities out there, would stop applying double standards. Maybe she is an eloquent writer, but why should rhetoric (or looks) be the determining factor when it comes to public empathy? #metoo and #blacklivesmatter both happened, but I'm yet to see black people being judged less harshly and trusted by the public.

kortilla(10000) 4 days ago [-]

Perhaps because she didn't lose a civil case in the same matter, and they actually caught the real killer? OJ probably would have gotten more sympathy if there had been some kind of viable alternative story.

Atreiden(10000) 4 days ago [-]

Is this comment in good faith, or are you needlessly playing Devil's Advocate?

There are very few similarities in their cases other than the fact that they were both tried for Murder.

- OJ was never convicted, he was acquitted outright. Amanda Knox was convicted. And only acquitted after appeals 4 years later.

- OJ did not get charged in a foreign country, in which local police and courts failed to provide due process. In fact, he received arguably the best legal defense in the country.

- OJ released a book afterwards - 'if I DID IT: Confessions of the Killer' describing the murder in great detail. I mean, you've seen the book cover, right? https://upload.wikimedia.org/wikipedia/en/thumb/4/4f/If_I_di...

I'm really stumped by this comment. OJ got off, and essentially bragged about it. How is there any similarity here?

> why should rhetoric (or looks) be the determining factor when it comes to public empathy? #metoo and #blacklivesmatter both happened but I'm yet to see black people being judged less harshly and trusted by the public.

I think you're massively conflating these two topics. There's a real discussion to be had on race dynamics and conflict in relation to public sentiment, but it's a real stretch to say that's at play here unless you have a better example than OJ.

ineptech(10000) 4 days ago [-]

Fascinating article. I thought she was exaggerating when she complained about people accusing her of being a media whore, until I scrolled down to the [flagged] [dead] and saw someone doing just that.

But I don't think she has a choice. In the past, it would've been possible for someone in her shoes to choose obscurity. In the past, the real Amanda Knox and the idea of Amanda Knox that exists in the collective unconscious of the media and media consumers would've drifted apart. Now, it's hard to imagine how that could happen. Even if she invented a new identity and moved to a small town in Alaska (which would be a new kind of prison sentence in some ways), it'd be newsworthy.

danso(10000) 4 days ago [-]

> I thought she was exaggerating when she complained about people accusing her of being a media whore

Not to pick on you, but why would you reflexively think her to be exaggerating about being called a media whore? People do that with literally everyone whose public complaints become a news story. And what motive would Amanda Knox have in particular? She was labeled an actual whore for the many years when the case was still in active prosecution.

sombremesa(10000) 4 days ago [-]

Moving from a Western to an Eastern country or vice versa (and using a pseudonym on top) is actually a pretty good way to become near-anonymous, unless you're a Michael Jackson level of celebrity.

Of course, not everyone has the background to be able to smoothly pull that off.

ludocode(10000) 4 days ago [-]

> But I don't think she has a choice.

Some countries recognize a 'right to be forgotten'. This is a good example of where such laws can help. She doesn't currently have a choice because many Western countries don't recognize this right. This is something that could change.

ghaff(10000) 4 days ago [-]

Arguably, at some point in the past, this would have been an obscure story and she'd still be in an Italian jail cell.

I sort of disagree though that someone like her couldn't drop out of the public eye if she wanted to. There are tradeoffs to be sure but it seems pretty possible.

podric(10000) 4 days ago [-]

Should there be a way to trademark your own name in order to prevent its misuse in creative works? That way, film studios would have to pay a licensing fee to have your name anywhere in the film or its marketing material.

It's strange to think that fictional characters often have more protection and control over the use of their name than real people.

stjohnswarts(10000) 4 days ago [-]

This can't happen. An option like that would conflict directly with freedom of speech and freedom of the press. As much as I hate what happens to people like her, there are far more cases like Donald Trump or Vladimir Putin that need to be put under the spotlight. Don't get me wrong, I hate what happens to people like her who get caught in the crosshairs. The director and company making the movie should be ashamed of themselves for either not dissociating themselves from her story or, at the very least, not working with her and reaching some understanding in which they respect her story. A lot of people know she was innocent, but I would say the majority don't and only followed the story in the beginning.

Frost1x(10000) 4 days ago [-]

We probably just need to fix loopholes in slander and libel laws and give common people more power to enforce them without it being a huge financial risk. People realized many years ago how names can be abused and dragged through the mud, and created a recourse for it.

Changing someone's name just a bit, or creating a copyrightable fictional character that everyone knows is a substitute (or can easily link to the real person if they're interested), is just a loophole around slander and libel, which the author points out with the 'Damien Matthews' example. It's completely legal, and now you have artistic freedom to reshape the story however you want. The person with the most resources to fight legally typically wins here.

Throwing in some disclaimer line like 'this is not based on actual people or events' seems to grant far too broad a liability waiver. It's really just plain wrong, and the author makes a great point about naming an event and agency. Branding is very powerful and can create subconscious links that otherwise shouldn't exist. Naming is a bit tricky, though, because you often pick an easy, memorable name to associate with something; naming sort of acts like a hash map with collision handling in my brain.

When I see Bill Clinton's name, or just 'Clinton', a whole slew of thoughts and memories link to that name, and it can be difficult to work out what someone means. Monica Lewinsky's name, on the other hand, acts as a memorable unique identifier for the event, unfairly to her. I know exactly what you're talking about: I know about Bill, the power differentials at play, and so on, but the name needs to be unique and memorable in language. As the author points out, this naming convenience comes at a cost to those who pick up improper associations of responsibility, so it's complicated. I think we should strive for branding that leaves out names where possible. Watergate seems like a great branding job: I immediately know it's Nixon, and it doesn't absolve him of any responsibility. Should the facts change when I read about Watergate later, say it turns out Deep Throat was actually responsible somehow, the Watergate name still exists and the associations of responsibility can change. Abstract your branding to avoid finger-pointing.

briffle(10000) 4 days ago [-]

That would quickly be abused to silence critics as well. Imagine if Brock Turner trademarked his name, and went after anyone discussing his story.

Not to mention, your name is not unique. I know of at least 2 other people in the US with the same first and last name as me.

dooglius(10000) 4 days ago [-]

I didn't get the impression the film used her name

sneak(10000) 4 days ago [-]

You don't own your reputation. That's out of your hands. You can not and should not be able to control what other people think and say about you.

jimhefferon(10000) 4 days ago [-]

Non-sarcastic question: suppose I trademark my name, and then someone with my name opens an account on a web site, or a restaurant, or writes a script. Are they in violation? (It is strange to me that, say, Twitter is a worldwide namespace. We had a restaurant in our East Coast US town that had to change its totally boring name because they were threatened by a West Coast restaurant with that name.)

octopoc(10000) 4 days ago [-]

Can't she sue the movie producers of Stillwater? Even if the movie producers didn't officially acknowledge inspiration, Vanity Fair mentions it:

> This new film by director Tom McCarthy, starring Matt Damon, is "loosely based" or "directly inspired by" the "Amanda Knox saga," as Vanity Fair put it in a for-profit article promoting a for-profit film, neither of which I am affiliated with

There is clearly precedent for this type of lawsuit, given this: https://en.wikipedia.org/wiki/All_persons_fictitious_disclai...

karaterobot(10000) 4 days ago [-]

The court case is in the public record, they don't use any real names, and they made it clear that it is not meant to be her story, so I would assume they are covered.

I also suspect Dreamworks' lawyers would have briefed the director on what he could and couldn't say in an interview if there was any danger of them getting sued.

stjohnswarts(10000) 4 days ago [-]

No she can't, the story is far too different. It only touches on a few major points of her story but is 90% different, it would never hold up in court and she would be out court costs and probably financially ruined after the trial was over.

ghaff(10000) 4 days ago [-]

Your link explains why that disclaimer is now routinely used.

gumby(10000) 4 days ago [-]

I think she became a public figure (through no fault of her own, as far as I can tell) and thus would unfortunately likely not be afforded the libel protections of a private citizen.

jnwatson(10000) 4 days ago [-]

Law and Order would have been sued out of business. The 'ripped from the headlines' mechanism is a tried and true method for TV procedurals.

You might own your likeness, but you don't own your life story.

Historical Discussions: The mermaid is taking over Google search in Norway (July 29, 2021: 897 points)

(898) The mermaid is taking over Google search in Norway

898 points 6 days ago by oarth in 10000th position

alexskra.com | Estimated reading time – 8 minutes | comments | anchor

Recently I've started seeing a lot of spam in Google Search in Norwegian. I'm not talking about a bad result here and there that ranks terrible. No, I'm talking about a large-scale spam operation that I've noticed more and more in recent days.

It's so bad that I'm convinced that this one spam domain is getting a large cut of all Google Search traffic in Norway. I can search for basically anything and find it in the first few pages with a very high probability.

The domain they use right now is a Danish one, havfruen4220.dk ("Havfruen" means "the mermaid").

You're greeted with this image if you visit it directly:

It seems like whoever is behind the site has been expecting us.

So I found a site with a silly domain and a silly image that ranks for things on Google Search. Do I have some examples? I sure do.

Example searches

Let me demonstrate the scale of the issue with some example searches.

A search for one of my brands? Yep, the spam site ranks as number 10. A large IT consulting firm in Norway (and some other countries)? Number 11. A local newspaper? Top of page 3.

Let's try another one. Let's try "REMA 1000" (the largest grocery store chain in Norway). Sure enough, on top of page 5, we have this:

A Google search result for "REMA 1000" that points to the spammer's domain.

Let's try something completely different and random. Maybe something people are wondering about. Let's try "How often" and let Google pick a thing for us.

Google suggestions for "How often" in Norwegian.

So the first suggested result is "how often should you shower" in Norwegian. Let's try it.

Google search result for "How often should you shower".

Sure enough, it's on the first page.

What about "How to calculate percentage" in Norwegian (Google's top suggestion when you start typing "Hvordan")?

"How to calculate percentage" is Google's top suggestion for "How".

Of course, on the first page, we get:

The mermaid got content on calculating percentages too.

What about "How often does apple update iOS"("hvor ofte oppdaterer apple iOS")?

Result nine and ten:

The mermaid even got content about iOS.

Let's take a look at results from just the mermaid:

Just a casual 9,95 million pages, according to Google.

There are a lot of pages. Most of them seem to be relatively new, created in the last few days.

How is the content generated?

Just by looking at the results, it doesn't make much sense. The content is scraped from a bunch of different places: some from Twitter, some from news sites, and some from other random websites. Content seems to be combined from multiple sources. The page is served through Cloudflare.

Searching for exact strings reveals that there are more domains used. All of them use the TLD .dk.

The thing is, there is no actual content available to us if we visit the page. The page uses cloaking and probably only shows content if you're visiting from Google crawler IPs.

If I pretend to be Google by changing my user agent, I just get the silly image I showed you. If I remove the GoogleBot user agent there is one difference: JavaScript is inserted that redirects the user to another page:

var b = 'https://havfruen4220.dk/3_5_no_14_-__1627506246/gotodate';
    (/google|yahoo|facebook|vk|mail|alpha|yandex|search|msn|DuckDuckGo|Boardreader|Ask|SlideShare|YouTube|Vimeo|Baidu|AOL|Excite/.test(document.referrer) && location.href.indexOf('.') != -1) && (top.location.href = b);
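Unwound, the snippet is just a referrer check. A minimal sketch of the same logic as a standalone function (the regex is copied verbatim from the snippet; the function name is mine):

```javascript
// Referrer patterns the spam script treats as "arrived from a search engine
// or social site" (copied verbatim from the injected snippet).
const SEARCH_REFERRERS = /google|yahoo|facebook|vk|mail|alpha|yandex|search|msn|DuckDuckGo|Boardreader|Ask|SlideShare|YouTube|Vimeo|Baidu|AOL|Excite/;

// Redirect only when the referrer matches and the current URL contains a dot,
// mirroring the `location.href.indexOf('.') != -1` guard in the original.
function shouldRedirect(referrer, href) {
  return SEARCH_REFERRERS.test(referrer) && href.indexOf('.') !== -1;
}
```

So a visitor who types the address in directly (empty referrer) just sees the mermaid image, while one clicking through from a search result gets bounced into the scam chain.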

How do they profit from this?

After the first redirect, that page redirects the user, again, to other scam domains. Some fake news sites pretend to be one of the most popular Norwegian news sites; others are basic "want to earn money fast online" sites.

This is one of the pages, asking if I've ever earned money online and telling me registration is free right now. After a few "do you want to earn £25k a month" questions, you're redirected to another domain.
This site is the one the previous page redirected me to. Clearly, they want me to purchase bitcoin with them, and that's a great idea since it's apparently hitting $500K soon.

Other sites are news sites like this:

The website pretends to be "Dagbladet"(Norwegian news site).

The fake news article is usually something like "this Norwegian celebrity reveals how he got rich". And it usually ends with some crypto scheme.

These scams are old, but they usually don't rank well. I've never seen anything like this. It's currently a top result for nearly anything on Google search in Norway.

How are they ranking so well?

We have all probably heard that Google's ranking is advanced and pretty hard to fool. Yet someone fooled it. I'm not going to pretend I know how they did this, but I think I have some ideas.

First of all, the content seems somewhat decent in google search at times, and I've clicked it multiple times myself when searching for things. When you click it, it does what it can to block browser navigation so you can't return to google. The content is also so clearly a scam that I had to read some of it for fun.

I think that Google uses stats on whether the user continued checking more results for that specific search query to determine if the visited result answered the user.

When the website blocks you from going back, Google might think you found what you were looking for and use this as a positive signal, ranking the site even higher. I often forget my exact search query, so I usually don't search again with that exact phrase if I'm blocked from going back.

How can Google fix this?

The simple solution would be to test sites regularly with an unknown IP and a common user agent, to check that a site isn't showing one thing to Google while giving real users something completely different. That would stop this.
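That kind of check can be approximated with two requests and a diff: fetch the page once with a normal browser user agent and once with Googlebot's, then flag any lines that only one client received. A rough Python sketch (the helper names are mine; a real crawler would also rotate IPs and render JavaScript):

```python
import urllib.request

BROWSER_UA = "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def fetch(url: str, user_agent: str) -> str:
    """Fetch a page while pretending to be a given client."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def lines_only_in_first(a: str, b: str) -> list[str]:
    """Non-blank lines served in response `a` but absent from response `b`:
    a crude indicator that the site cloaks content per client."""
    b_lines = set(b.splitlines())
    return [ln for ln in a.splitlines() if ln.strip() and ln not in b_lines]

# Usage sketch: anything returned here was shown to real browsers but hidden
# from the crawler, e.g. an injected redirect script.
# suspicious = lines_only_in_first(fetch(url, BROWSER_UA), fetch(url, GOOGLEBOT_UA))
```

This is the same idea as the curl comparison one commenter posted in the thread, just wrapped in a reusable function.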

Another thing is that an alarm should probably go off when a new domain takes off like this. The havfruen4220.dk domain is shown for basically anything, so it wouldn't surprise me if it's the most shown domain in Google search in Norway right now.

How would you make a profit if you could rank for basically everything on Google search?

Thanks for reading this weird and messy blog post. I just wanted to share the weirdness. Have a great day! 😊

All Comments: [-] | anchor

hoppla(10000) 5 days ago [-]

The recaptcha process should be reversed. The sites should prove to humans that their content is not generated by bots.

ant6n(10000) 5 days ago [-]

Perhaps a search engine that deranks pages that monetize visits (like ads) would be a good first step.

fergie(10000) 5 days ago [-]

Norwegian here- I haven't seen this at all- maybe the author has been somehow 'fingerprinted' and targeted?

aembleton(10000) 5 days ago [-]

Have you tried in a private window to check that you're not fingerprinted?

Ueland(10000) 5 days ago [-]

I have some experience in this field. Around two years ago I was a DevOps engineer for the company running Dagbladet, Norway's #2 newspaper. One of the things I did was keep an eye on mysterious traffic.

I managed to find a huge spam network that set up a proxy service that delivered normal content, but injected 'you can win an iPhone!' spam to all users visiting them.

Since I was in a position to monitor their proxy traffic towards many sites I managed, I could easily document their behaviour.

At the same time, I wrote a crawler that visited their sites over a long, long period. I learned that they kept injecting hidden links to other sites in their network, so I let my bot look at those as well.

By this time, I also got a journalist with me that started to look at the money flow to try and find the organisation behind it.

My bot found in excess of 100K domains being used for this operation, targeting all of western Europe. All 100K sites contained proxied content and were hidden behind Cloudflare, but thanks to the position I was in, I managed to find their backend anyway.

We reported the sites to both CF and Google, and to my knowledge, not a single site was removed before the people behind it took it down.

Oh, and the journalist? He did find a Dutch company that was not happy to see either him or the photographer :)

dylan604(10000) 5 days ago [-]

That sounds like a hell of an investigation, and now my curiosity is running. 100K domains sounds like a huge amount of logistics on their side to keep it all running. It would be interesting to read about how a spam company manages that kind of infrastructure compared to a 'legit' company.

A legit company will always have internal struggles between dev/sales/marketing, so things just take longer and are much more draining to accomplish. I'd imagine a spam org just needs to have the bare minimum up to satisfy whatever need they have, knowing that humans won't necessarily be perusing those domains, yet it's 100K domains. I could almost see something like this running more smoothly. I can also see it being run by a small number of people who let things lapse so it's just barely hanging together. So many questions...

pepy(10000) 5 days ago [-]

Do you want to get to the bottom of this? A friend of mine is a top Dutch lawyer with an interest in these things.

avian(10000) 5 days ago [-]

> We reported the sites to both CF and Google, and to my knowledge, not a single site were removed before the people behind it took it down.

As someone who tried reporting spam sites because they were using content scraped from my website, I'm not surprised.

Cloudflare has a policy that they will not stop providing their IP hiding/reverse proxy services to anyone, regardless of complaints. The best they do is forward your complaint to the owner of the website, who is free to ignore it.

They say 'we're not a hosting provider' as if that's an excuse that they can't refuse to offer their service. I'm sure many spam websites would go away if they couldn't hide behind Cloudflare.

ultimoo(10000) 5 days ago [-]

> By this time, I also got a journalist with me that started to look at the money flow to try and find the organisation behind it.

Very curious to know what you found!

lifeisstillgood(10000) 5 days ago [-]

Can I just clarify?

There is / are organisations that:
a) scrape legitimate sites for content,
b) host that content on their own 100K domains,
c) sit behind Cloudflare,
d) do some SEO???,
e) when someone finds their site, inject an ad or similar rubbish,
f) do this enough that they make money off the ad / competition / porn?

That seems like a problem that the "original-source" metatag was supposed to stop?

tikiman163(10000) 5 days ago [-]

The reason you found so many domains is that they intentionally take down their spam sites and reload them under a new domain every few hours. They do this so they can't be taken down by people reporting them as spam. They literally set up the next domain while the current one is still in use, so they can do a live swap to the next one without interruptions to their spam operations. This is typically done in an effort to spread Trojan malware to anybody running computers with out-of-date operating systems and browsers. Windows getting people off of Internet Explorer has been a huge blow to them, as it reduces the number of possible vulnerabilities someone might have when they get sent to one of these Trojan spam sites.

chovybizzass(10000) 6 days ago [-]

I've been using https://search.brave.com for a few weeks. Most of the time I find what I need.

devmunchies(10000) 5 days ago [-]

Yes, me too. I have come to like it better than DDG.

WarOnPrivacy(10000) 5 days ago [-]

Their news scroll is also better than average.

keyme(10000) 5 days ago [-]

Google search has progressively deteriorated in quality over the last 10 years, to the point where I see it becoming useless in the relatively near future. And it's mainly not even their fault.

I've been using Google search for all kinds of research for 15 years. There used to be a time when you could find the answer to pretty much anything. I could find leaked source codes on public FTP servers, links to pirated software and keygens, detailed instructions for a variety of useful things. That was the golden age of the web.

These days, all the 'interesting' data on the Internet is all inside closed Telegram chats, facebook groups, Discords or the rare public website here and there that Google doesn't want to index (like sci-hub, or other piracy sites).

The data that remains on SERPs is now also heavily censored for arbitrary reasons. 'For your health', 'For your protection'. Google search is done.

cratermoon(10000) 5 days ago [-]

Whether or not it's Google's fault depends on how much you attribute the development of the advertising-driven distraction-factory internet to Google's business. We can debate whether or not Google was ever really in the search engine business; certainly at one point the search was a useful tool. Today, Google search is a sort of glorified Yellow Pages*. Their main product is selling ads in this nouveau YP. The results their search engine returns are now heavily skewed towards revenue-generating sites. Such sites may incidentally be informative, but they are generally selling something.

Edit: see this other HN story: https://news.ycombinator.com/item?id=27993564

This is not to say that all search results are bought, although of course those are present now, too. But overall Google presumes that whatever the user is searching for, the best result is one where the answer is 'buy this thing'.

For those search results that don't lead directly to commercial products, the revenue generation is indirect: through the collection of user preferences and activity, Google can refine its search results towards maximizing revenue. At the very least, the result is likely to be a site that has ads, some of which generate revenue directly for Google.

*In the old-fashioned Yellow Pages book, you couldn't really "search," but there was an index by category. It had many of the issues inherent in categories, but it didn't take an expert to find things. Google search eliminates the need for anyone to understand a taxonomy of businesses.

omega3(10000) 5 days ago [-]

> And it's mainly not even their fault.

It's precisely their fault: they've created an environment that incentivizes low-quality, irrelevant content, and they are actively hostile towards users. Two examples off the top of my head: ignoring the country-specific site (previously, if you wanted to search only local news it was very easy to do), and completely ignoring exact-phrase searches in double quotes.

smusamashah(10000) 5 days ago [-]

You should try yandex.ru for all that interesting stuff. They don't censor any of it.

fukmbas(10000) 5 days ago [-]

You're delusional if you think Google search is going anywhere lol

Google search used to include discussion. They'll bring it back

kkoncevicius(10000) 5 days ago [-]

Google seems to also place less emphasis on search phrases. When searching for exact article names I easily find them on DuckDuckGo, but not on Google. Two recent search-term examples:

1. the scientific worldview needs an update

2. from reproducibility to over reproducibility

jacobolus(10000) 5 days ago [-]

Google scholar search is still very useful.

DuckDuckGo is nowadays more useful than Google for my web searches.

IfOnlyYouKnew(10000) 5 days ago [-]

If 90% of what you're searching for is keygens and 'inside closed Telegram groups', it might just be time to grow up?

mojzu(10000) 5 days ago [-]

I think it depends on what you're searching for, for dev related stuff no other search engine I've tried comes close. But there are whole industries now that are so heavily SEO'd that finding useful information without knowing the exact keyword to search for is incredibly frustrating

nuker(10000) 5 days ago [-]

> Google search has progressively deteriorated in quality

49 out of 50 review sites are now just affiliate links to Amazon. "Check the price on Amazon" buttons are the main content there

Adrig(10000) 5 days ago [-]

One of the last use cases for Google is being a proper search engine for Reddit. But I think they are aware of their downfall; that's why the top of the page is increasingly taken up by their widgets that provide the information directly.

On the other hand, YouTube is the second most popular search engine and I don't see it slowing down. What an insight they had when they bought it.

Edit: I entirely agree that valuable information is found more in communities nowadays. I also predict that in 5 years the web will be explored mostly through communities

herbst(10000) 5 days ago [-]

Google only recently started to totally butcher the Swiss search results. For some reason I could still find direct download links to movies and music a few years ago (kinda legal here).

Now such search results often don't even get a second page...

Crazyontap(10000) 5 days ago [-]

Can somebody else who is in Norway confirm this? It could simply be malware injecting this. It would be great to eliminate that possibility

intarga(10000) 5 days ago [-]

I'm in Norway, and I tried the first search "rema 1000" without getting any spam results on the first two pages...

That doesn't entirely eliminate the other possibilities though: Google search isn't deterministic, and the domain could have been reported since the article went up.

javier2(10000) 5 days ago [-]

It is not happening with the example from the article for me, but I have seen this practice ruin my search results to varying degrees over the past 6 months. Sometimes entire keywords are just broken because there are so many fake sites.

Ueland(10000) 5 days ago [-]

Can confirm this is not malware, Google has a huge spam problem, see my previous comments.

probably_wrong(10000) 5 days ago [-]

I tried four of the queries from Germany using a private window. 3 returned results from themermaid on the first page.

In particular, the only results ranking higher than themermaid for 'hvor ofte oppdaterer apple ios' are those coming from support.apple.com.

knidoyl(10000) 5 days ago [-]

I'm in France and searched for the 'how often' query; it returned themermaid on the second page

sleepyhead(10000) 5 days ago [-]

It's not showing for the example search (Rema 1000) for me right now, but I did a search yesterday about a person/company where the results were news-related content, and I ended up on a site with the same image. However, I can't find havfruen (mermaid) in my browser history, so they must use other domains as well.

londons_explore(10000) 5 days ago [-]

It's the hooking of the browser back button in a way that Google does not detect which is the real 'trick'.

Anyone who can do that can rank as high as they like for any search query.

londons_explore(10000) 5 days ago [-]

To expand on this: A very strong ranking signal is how many of the users that click a search result are sufficiently satisfied with the information they have found to end their search.

A good proxy for this is how many people don't click the 'back' button to see other results.

Google is already aware of sites which hijack the back button. Their crawler detects this, and if they find it, they throw out the figures of how many people click the back button.

So if you can find a way to hook the back button so nobody can click back, while stopping google thinking you have hooked the back button, then your page will keep creeping up the rankings.

Google detects back button hijacking with their crawler (by rendering the page in Chromium and seeing the effect when hitting the actual back button), but this is circumvented by presenting the crawler different html (or making sure the page behaves differently in their crawler, potentially by checking things like the model of the graphics card: Google's crawlers don't yet support most of WebGL 2.0, and they also simulate playing audio at the wrong rate).

Google also detects how many real users click back. If it's zero, then that's a warning flag. So I'd guess the back-hijacking logic is only activated ~80% of the time.

Schnurpel(10000) 5 days ago [-]

If I ran a global infrastructure company like Cloudflare, I would also not take any sides, and would leave my service open to anyone. The world is full of people who get upset about something. However, if I declare a hands-off policy, it must be truly hands-off. Cloudflare kicked off Switter https://www.theverge.com/2018/4/19/17256370/switter-cloudfla..., it banned 8chan https://blog.cloudflare.com/terminating-service-for-8chan/, and it banned The Hacker News https://mobile.twitter.com/thehackersnews/status/66900183605... . That's not how hands-off works.

notRobot(10000) 5 days ago [-]

To be clear, that's not HN, but The Hacker News, a different website, known for... dubious reporting.

wdrw(10000) 5 days ago [-]

Interesting, the image seems to contain characters from a Russian children's cartoon ( https://en.wikipedia.org/wiki/Kikoriki )

incrop(10000) 5 days ago [-]

And the girl on the left is from comedy clip 'Foreign language courses in Balashikha' https://youtu.be/wrYFUBA2kUA

mads(10000) 5 days ago [-]

They are using different images. A month or so ago it was some guy tied up on a chair with some Russian text on top of the image.

There are a lot of these domains (ptsdforum.dk, verdes.dk, momentsbykruuse.dk off the top of my head). Always Danish domains, and always registered by the same person in Riga.

aasasd(10000) 5 days ago [-]

Yup, these very guys: https://s5.cdn.teleprogramma.pro/wp-content/uploads/2020/04/...

A rather non-sequitur choice, like everything else with this thing I guess.

janmo(10000) 5 days ago [-]

I've seen the same here in Germany, but they only appear if you use the 'results within the last 24h' filter. The German content looks like it was generated by GPT-2 or 3; it makes no real sense if you read it. If you go to the page you are immediately redirected to a scam, just like the article mentions. Interestingly, they use '.it' domains here. It also looks like the domains might have been hacked, or are expired domains that have been bought.

For example if you check havfruen4220.dk on archive.org you can see that it appears to have been a legitimate business website before. https://web.archive.org/web/20181126203158/https://havfruen4...

How do they rank so well?

I've checked the domain on Ahrefs and it has almost no backlinks. But if you look closely you will see that all the results that rank very well were added very recently. On the screenshots in the article you can see things like 'for 2 timer siden', which means '2 hours ago'. It looks like Google is ranking pages with a very recent publishing date higher.

Edit: Here is what the content of such a site looks like: https://webcache.googleusercontent.com/search?q=cache:Bk0VsM...

ROARosen(10000) 5 days ago [-]

Seems like this is not a new thing. Here is a warning tweet from the beginning of July from Danish cybersecurity guy @peterkruse, who saw his name coming up for a different domain owned by the same registrant as havfruen4220.dk


NorwegianDude(10000) 5 days ago [-]

.it pages are used in Norway too, but I'm not sure it's something GPT-ish that's being used. Whole sentences are copied word for word from other articles. (Might it be a small dataset it's trained on?)

It could of course be that it's something similar to GPT, trained on all the content it could find, that then writes articles, because it's clearly messing up sometimes, judging from the small piece of content available on the search results page.

I'm not sure if this is an ML race, and whether the reason we're not seeing the same thing in English is that Google understands English better than the spammers do, while in Norwegian and German it's the other way around?

Clearly freshness is a large part of it. Google seems to have indexed millions upon millions of pages tied to this in the last 24 hours.

e_carra(10000) 5 days ago [-]

I had similar experiences with: https://www.xspdf.com/resolution/51859292.html

The content seems taken from other websites and mixed in a nonsensical way. It comes up frequently in my search results. www.xspdf.com has completely unrelated content and seems to be a separate business.

kostecki(10000) 5 days ago [-]

This definitely looks like an expired domain that was bought. Havfruen seems to be a restaurant in the city of Korsør, which conveniently has the postal code 4220.

nmstoker(10000) 5 days ago [-]

I presume 'GPL' was an autocorrect from the intended 'GPT' right?

adventured(10000) 5 days ago [-]

Typically Google has a warming/trial period for new large content sites, after their search bot is introduced to the content and has spidered its way through the site.

For example, there used to be a very common content farm system that was structured like this:


So when people searched for sites by domain name, the zillions of low traffic long-tail results of this farm system would be all over Google's results.

What it would present on the page is a mess of data about nytimes.com, such as traffic, or keywords pulled from the site header, maybe a manufactured description (or pulled right from the site head), sometimes images / screenshots of the site. Anything that could be stuffed in there to fill up enough content to get Google to not do an automatic shallow content kill penalty on the content farm. This worked for several years very successfully until Google's big algorithm updates, 9-10 years ago or whatever now (Penguin et al.). You could just build a large index of the top million domains (eg Alexa and Quantcast used to provide that index in a zip file), spider & scrape info from the domains, and build a content farm index out of it and have a million pages of content to then hand off to Googlebot.

So initially such a farm will boom into the search rankings, Google would give them a trial period and let out the flood gates of traffic to the site. Then Google would promptly kill off the content farm after the free run period expired and they had figured out it was a garbage site.

I still occasionally see this model of content farm burst up into traffic rankings, and it's usually very short lived. It makes me wonder if that's not more or less what's going on with the Mermaid farm.

MrUnderhill(10000) 5 days ago [-]

Interesting, I've been seeing the same spam for Norwegian searches, but with the domain nem-multiservice dot dk or nem-varmepumper dot dk - presumably another legitimate business's domain that expired and was grabbed by the scammers. Visiting those domains shows the same graphic as in the article.

Almost any search in Norwegian will have obvious scam sites like these in the top 10 results.

Other domains part of the same scam that show up in my results today: mariesofie dot dk, bvosvejsogmontage dot dk

I wonder if it is related to this: https://www.dk-hostmaster.dk/en/news/dk-hostmaster-takes-102...

fny(10000) 5 days ago [-]

Somewhat related: has anyone else noticed a massive change in the breadth of results? I was searching for reviews of diving equipment and some less niche items, and I feel like I'm being spoon-fed results from the same comparison engines. Since when did algo content become king?

yojo(10000) 5 days ago [-]

This exactly. I've been researching specific house repair issues and just get nothing but content spam. Whenever I want specific information I find myself adding "reddit" to the query string, which will usually turn up a thread with links out to the actual answer.

gomox(10000) 5 days ago [-]

I couldn't agree more. More and more lately it's felt like the AltaVista days. I know the information I'm looking for is out there; it's just not on the Google results page, which is plastered with unreadable stuff (paywalls, content farms), crap 'content cards' in the results page, and sneakier and sneakier ads.

I'm not sure what the beginning of the end was for Google Search, but I think the day where they changed the ad background to white is a good candidate.

Google Search used to be like Chrome or Gmail - we know it's wrong in the long term, but it's hard to stop using it because it just works so well.

But these days, not anymore. Search is a lot less sticky, and it is their golden goose they are messing with here.

jhoechtl(10000) 5 days ago [-]

Searching in Google has become all about shopping. Pure and relevant content is hard to find.

Even today there are bloggers out there who do not have a commercial affiliation with the goods/items/things they are blogging about. Such content is practically impossible to find amid all the Amazon-affiliated pseudo-informational spoof sites.

estebarb(10000) 5 days ago [-]

I feel the same. Looking for specialized topics with Google is now very difficult. It is now impossible to look for phones, uncommon words, or anything that is not the mainstream result.

I'm not sure if the culprit is BERT or neural ranking. But in the last few years I feel it is more and more common that I leave Google search without useful information. The worst part is that all the competing search engines are using the same algorithms, which are only useful for mainstream results.

ajsnigrutin(10000) 5 days ago [-]

At least you get the results you are looking for... I search for three keywords, and it chooses to ignore the two specific ones and show only the general one (while putting a line under the search result saying that the result does not contain some keywords).

Basically, it's like searching for diving suit thickness, and Google ignoring 'suit' and 'thickness' (until I specifically put those two words in quote marks) and only showing me results for diving.

alfiedotwtf(10000) 5 days ago [-]

I'm just sick of seeing Pinterest and Quora as the top 8 results :/

juskrey(10000) 5 days ago [-]

Simply put, Google lost the battle against SEO long ago and, trapped by its own cash flow, can't do anything radical to change that.

YeBanKo(10000) 5 days ago [-]

I have been struggling with the same issue recently. Results are much narrower and they seem to lean towards consumer goods. Though I don't remember ever buying something that came from Google search.

mdolon(10000) 5 days ago [-]

I wrote a blog post complaining about this early last year: http://mdolon.com/essays/amazon-has-ruined-search-and-google...

The Amazon affiliate program is definitely contributing to this problem.

pjmlp(10000) 5 days ago [-]

Same here, I no longer can find anything sensible on Google, regardless how much I try to customize the search expression.

Additionally, as a polyglot I find it very irritating that Google tries to helpfully translate queries for me, so I have to go to other search engines to actually find the article in the language I want.

weird-eye-issue(10000) 5 days ago [-]

Some data on their traffic from some SEO tools I pay for:

Ahrefs: 230k organic traffic, valued at $124k
SEMrush: 558k organic traffic, valued at $355k

These are estimates and can be wildly under- or overestimated, but they show that this is happening on a very large scale.

For a quick idea of how this is possible I looked at their top pages (according to Ahrefs). Their top page is ranking #2 for the keyword 'interia', which has 207k searches per month in Norway and is rated 0 (out of 100) for ranking difficulty. Usually a keyword with that many searches would be incredibly hard to rank for; I've never seen anything like this. So what is happening here looks like they are just taking advantage of a market with really low-competition keywords.

NorwegianDude(10000) 5 days ago [-]

Interia is a large Polish web portal, from what I could find. Norwegian people don't know it, but Polish people might; around ~2% of people in Norway are Polish. It also ranks #1 for me. It's in Polish too, so basically only ~2% of Norway would understand it.

However, the weird thing is that it steals content from articles and then outranks them. Most pages seem to be boosted, maybe as a result of being new. (Most content is just hours old.)

Could you check these too? (exactly the same thing, but newer, it seems) www.mariesofie.dk nem-varmepumper.dk

Clearly reused domains.

Ueland(10000) 5 days ago [-]

Side note, but what do you think about Ahrefs? I'm doing some tests to see how easy it is to get ranked for keywords (with actually helpful content, not crap like this thread is about), but I find the AdSense keyword tool not that helpful, as it deletes many keywords when you search for them, which kind of voids that tool.

But I currently feel that paying $100/mo for Ahrefs for something I do as a side project is a tad wasteful.

ricardo81(10000) 5 days ago [-]

Poor man's cloaking

curl -A 'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' 'https://havfruen4220.dk' > 1.html

curl -A 'Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)' 'https://havfruen4220.dk' > 2.html

diff 1.html 2.html

7d6
< <script>var b='https://havfruen4220.dk/3_5_no_14_-__1627553323/gotodate'; ( /google|yahoo|facebook|vk|mail|alpha|yandex|search|msn|DuckDuckGo|Boardreader|Ask|SlideShare|YouTube|Vimeo|Baidu|AOL|Excite/.test(document.referrer) && location.href.indexOf('.') != -1 ) && (top.location.href = b); </script>

gvb(10000) 5 days ago [-]

The 'diff' output (above) needs an extra line break to avoid HN automatic line wrapping. The output of the diff command is:

diff 1.html 2.html

7d6
< <script>var b='https://havfruen4220.dk/3_5_no_14_-__1627553323/gotodate'; ( /google|yahoo|facebook|vk|mail|alpha|yandex|search|msn|DuckDuckGo|Boardreader|Ask|SlideShare|YouTube|Vimeo|Baidu|AOL|Excite/.test(document.referrer) && location.href.indexOf('.') != -1 ) && (top.location.href = b); </script>

monday_(10000) 5 days ago [-]

Not sure how relevant this is, but the animal characters in the top image are from the Russian hit children's cartoon 'Smeshariki' (literally 'The Laughballs').

snickersnee11(10000) 5 days ago [-]

Also, the image of the woman on the left is from a Russian meme from a TV show.

knolax(10000) 5 days ago [-]

More reasons why a global search monopoly is suboptimal. Smaller markets like this are just going to get neglected and maintained just enough that a better alternative can't compete. Google search is basically useless for any language other than English.

aembleton(10000) 5 days ago [-]

Surprisingly, no one besides Yandex has created a search engine that targets a language other than English

evolve2k(10000) 5 days ago [-]

Before I accessed the article I was hopeful from the title that "The Mermaid" was some hot new search engine out of Norway.

dotcommand(10000) 5 days ago [-]

Same here. But sadly the title would have been 'Google purchases 'The Mermaid' for $X'... Given their near $2 trillion market cap, I doubt any search engine would be allowed to stay hot for too long.

mromanuk(10000) 5 days ago [-]

Same for me

jessaustin(10000) 5 days ago [-]

TFA talks about Google testing with an 'unknown IP', but doesn't mention any testing done by the author with cookies cleared or in incognito mode. This seems basic.

finnh(10000) 5 days ago [-]

What do you expect incognito to change? That would presumably show the same content the author is seeing. Only Google sees the content that drives the ranking.

It is Google that needs 'incognito' mode, not the author.

onepunchedman(10000) 5 days ago [-]

Wow, the Norwegian on those scam web sites is actually perfect. Never seen that before.

Ueland(10000) 5 days ago [-]

That's because it's real content that they have stolen and republished. In SEO circles one likes to say that original content is king. Well, not so much after all.

tikiman163(10000) 5 days ago [-]

I'm kind of curious why he's so concerned about this. They've never managed better than ninth most relevant, and in most cases they didn't even make the first page of results. Any advertising person will tell you: if you aren't in the top 3 results (basically the top result, now that paid ads automatically get the top 2 spots on nearly all searches), your odds of being seen and clicked on drop to almost nothing.

Are they potentially doing harm? Sure. Have they successfully managed to trick anybody with this? I'd be extremely surprised if they're getting more than a dozen people a day clicking through from being the ninth result, and when people see they've been redirected to an advertisement, the majority immediately click away.

This isn't like clicking on a fake porn site that redirects to cam girls with viruses hidden in all the downloads. It's random unrelated searches redirecting you to blatant ads for cryptocurrency. The kind of people who are young enough to know what cryptocurrency is and how to buy it also know how to spot a redirect to a fake website.

burnished(10000) 5 days ago [-]

These kinds of scams are a stochastic process. They don't work on your average person; they only work on vulnerable people. Here's the catch though: everyone is vulnerable at some point in their lives. This is where the stochastic process comes in: they don't need to get you when you're strong, they just need to test enough people enough times to catch them in a vulnerable moment.

dhosek(10000) 5 days ago [-]

The one thing I want more than anything from Google or DuckDuckGo or anyone really is the ability to give a list of domains and never have their results show up in my searches. I know I can do this on a per-search basis, but I want it to be a configurable setting.

niutech(10000) 2 days ago [-]

Just filter out results using uBlock Origin like this:
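The filter itself didn't survive the repost. As a sketch, a uBlock Origin procedural cosmetic filter along these lines hides Google results that link to a given domain (the domain is a placeholder, and the `.g` result-container class depends on Google's current markup, which changes over time):

```
google.com##.g:has(a[href*="spam-example.dk"])
```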

mattwad(10000) 5 days ago [-]

UBlacklist is a plugin that does this. It's so great to be able to hide all those sites that just cache Git issues and SO posts.

eitland(10000) 5 days ago [-]

I used to have a text document on my desktop containing a list of domains that contained autogenerated content, each with a minus in front, like:

-stupidautogeneratedcontent1.com -stupidautogeneratedcontent2.com etc

I figured sooner or later Google would pick up the signal, but I think instead they just started ignoring my '- requests', so I stopped using them. edit: or maybe they fixed the problem. Spam sites used to be a problem during the early decline of Google. I think what happened was that the problem actually almost disappeared for me and was replaced by irrelevant results from non-spam sites.

Edit: mahalo.com was one of those, https://en.m.wikipedia.org/wiki/Mahalo.com

gnyman(10000) 5 days ago [-]

Pet theory (disclaimer that I know very little about SEO) would be that the website with the cloned content loads fast and does not load 4 MiB of javascript, thus beating the original content in ranking mostly because of speed, which I believe is an important factor in Google rankings (and getting more important).

Add to that some link spam, plus preventing visitors from returning so there is no bounce-back...

Either way, I can't help but be a bit impressed by the SEO spammers outsmarting the people at Google. (Edit: and I don't mean to say they are smarter or anything, just that they only need to find one weakness in the algorithm, while the people working to improve it need to make it work for everything.)

jmiserez(10000) 5 days ago [-]

Once the hard requirement on speed impacts the quality of results it no longer helps me as a user. I'd rather have the sites invest their time in good content and wait a few seconds rather than get fast but low quality SEO-ed results. Same with AMP, the quest for speed doesn't make my experience faster if I still load the original page (which is often still necessary).

paxys(10000) 5 days ago [-]

I doubt it's some crazy sophisticated SEO hijacking operation. Probably a result of a small data set (Norwegian language web pages), specific search terms (Norwegian brands, companies), and lots of keyword stuffing. Most of the examples the author pointed out were from pages 5-10 of Google results, which are probably worthless for ad revenue anyways.

tyingq(10000) 5 days ago [-]

It does rate a pretty good chuckle recalling old Google blog posts about their various uber-sophisticated anti-spam ML algorithms and how black hat SEO just wasn't possible anymore.

Osiris(10000) 5 days ago [-]

He specifically pointed out that it's ranking in the top 10 for nearly every search he did.

rchaud(10000) 5 days ago [-]

This type of scraped-content website was common for English language searches back in 2010 or so. I believe the 'Panda' algorithm update eliminated them from English searches.

nkozyra(10000) 5 days ago [-]

> The simple solution would be to test sites regularly with an unknown IP and common user agent to check that a site isn't just showing content to Google and gives real users something completely different. That would stop this.

Surely Google does this, right? Given that - in theory - showing different content to Google versus non-Google should result in a penalty, anyway ...

not2b(10000) 5 days ago [-]

The problem is that paywall sites already do this: Google sees the article, others see a paywall.

agency(10000) 5 days ago [-]

This is only tangentially related but has anyone else started getting more obviously spam emails in their gmail inbox lately? I feel like for a long time I never got spam in my inbox but lately I'll get ones that seem like they should be easy to detect, talking about gifts and stuff and uSiNg wEirD capitals or s p a c i n g. Is it just me?

sp332(10000) 5 days ago [-]

Yes, and more non-spam email is getting filtered as spam. Also, a mailing list I was unable to unsubscribe from and marked as spam at least 5 times kept being delivered to my inbox.

philiplu(10000) 5 days ago [-]

Not just you. Something changed two or three months ago. Never really saw spam for years before that; now 3 or 4 mails a day.

beart(10000) 5 days ago [-]

I'll chime in as well. I forward everything from gmail to another account I have. I pretty much never got any forwarded email for years because the gmail account is only really used as an identity for google services. A few months ago I suddenly started to get a significant amount of spam forwarded for no known reason.

javier2(10000) 5 days ago [-]

Yes, on some days I've even had 5 different spam emails in the inbox.

matsemann(10000) 5 days ago [-]

Yeah, I've seen this domain a lot lately. But I've complained about the Norwegian results for years [0]. For most searches there will be a result that's just keyword spam ranking high. I retried my 'pes anserinus bursitt' search now, 2 years later, and two results are spam from havfruen, and some other results are from https://no.amenajari.org, which is also just translated and scraped content for all languages that Google seems to love, as I've seen it for years. A third domain I often see is 'nem-varmepumper'. Apparently a site about heat pumps has content on everything.

Can't fathom Google not catching this..

[0]: https://news.ycombinator.com/item?id=21621099

porbelm(10000) 5 days ago [-]

When I try that search, havfruen is seventh place. NHI and other good results at the top.

YMMV a lot with Google results. For me, it's usually great where DDG is kinda crap, but not as bad as... shudder ... bing

rataata_jr(10000) 5 days ago [-]

Havfruen, brought to you by mountain trolls from Finnmark.

sleepyhead(10000) 5 days ago [-]

There are no mountain trolls in Finnmark, they live further south as the mountains in Finnmark are not very big.

bigpeopleareold(10000) 5 days ago [-]

I hate dealing with this and now refuse to use Google, having seen these patterns in search results while researching common things (like housing) in Norwegian, here in Norway. I rarely use Google these days, but I thought for a second that Google might give better results than DDG in Norwegian; this stuff is aggravating. This is one of those cases where they screw around with history so badly that you just have to start fresh on whatever you were doing instead of going back.

edit: one other thing I have seen, but it doesn't mean it is always spam. All The Words In A Title Are Capitalized - it's something to pay attention to whether it is spam or not. Conventionally, titles are usually not like that in Norwegian.

eitland(10000) 5 days ago [-]

> edit: one other thing I have seen, but it doesn't mean it is always spam. All The Words In A Title Are Capitalized - it's something to pay attention to whether it is spam or not. Conventionally, titles are usually not like that in Norwegian.

Another big one is that Norwegians, like Germans, write compound words together. Just one example from one of the stupid ads: 'Spesial Reportasje' is a dead giveaway, and not only because of the capitalization.

(Oh well, sadly, because of years of pressure from Word's incompetent spell checker and lenient teachers, this is getting worse. I fear we are seeing compound damage here, as kids that got away with this are now becoming teachers...)

bigpeopleareold(10000) 5 days ago [-]

Just want to add to my comment that this is not limited to havfruen4220.dk but is a general pattern. I tried a couple of search terms like 'mattilbud rema 1000' and found more .dk domains on the second page (nem-varmepumper.dk, humanrebels.dk) - two sites that have nothing to do with food.

l0b0(10000) 5 days ago [-]

WHOIS shows it was registered four weeks ago by someone in Riga, Latvia.

rataata_jr(10000) 5 days ago [-]

Tal's ghost is trolling now?

qwerty456127(10000) 5 days ago [-]

For every country/market, somebody should make a search engine to compete with Google. Now this is a chance for Norway.

matsemann(10000) 5 days ago [-]

Used to have https://www.kvasir.no/ but now it's just a skinned Google.

sesam.no (not a valid domain anymore) was an engine made by a big Norwegian company back in 2005 or so.

Norway used to be big in search. FAST got acquired by MS back in 2008.

sleepyhead(10000) 5 days ago [-]

We had a fast one but Microsoft bought it and shut it down.

the_biot(10000) 5 days ago [-]

For all that Google search has been utterly crap for going on a decade now, I have to admit part of the reason is that they get targeted relentlessly by SEO spam operations like this. I like DuckDuckGo for now, but I imagine as they get bigger they're going to be a target for these kinds of spam just the same.

beebeepka(10000) 5 days ago [-]

Google search has been a brochure for a long time now

rvba(10000) 5 days ago [-]

Because they automated everything and you cannot reach any human in quality assurance.

raverbashing(10000) 5 days ago [-]

Even worse, getting this kind of spam through to DDG (Bing?) seems easier than on Google

It seems DDG is worse at finding the more authoritative sites about a subject compared to Google.

skinkestek(10000) 5 days ago [-]

> I have to admit part of the reason is that they get targeted relentlessly by SEO spam operations like this.

A bit of it is probably that.

Outright ignoring my queries (+, double quotes, 'verbatim' and all) takes more than SEO tactics; it takes someone inside Google, either malicious or, more probably, incompetent.

Or more probably: someone was so busy trying to use AI in search that they haven't had time in the last ten years to consider whether it was smart.

boomlinde(10000) 5 days ago [-]

> they get targetted relentlessly by SEO spam operations like this.

Why, though? There is an arbitrary ranking system that seems increasingly independent of what I actually searched for. Google has created a game where the winner isn't necessarily relevant or at all useful. It's inevitable that spammers will play that game.

fauigerzigerk(10000) 5 days ago [-]

Is there really any difference between DDG and Google when it comes to SEO spam? If there is, I sure haven't noticed in spite of using both, often for the same search terms.

It seems to me that the techniques used to spam Google's index work just as well on Bing's index.

qwerty456127(10000) 5 days ago [-]

I'm surprised to find out people actually return to the search results page using the back button. Whenever I am serious enough (enough to keep looking after the first link I click does not satisfy me) about finding something I always Middle-Click or Ctrl+Click the links to open them in new tabs.

chimen(10000) 5 days ago [-]

People are easily 'surprised' these days

TeMPOraL(10000) 5 days ago [-]

Artefact of mobile use perhaps? 'Open in new tab' is slightly harder on a phone or on a tablet.

hayksaakian(10000) 5 days ago [-]

Interesting because it shows that bounce-back is a more significant ranking factor than before.

It seems like they've manipulated rankings by locking people in to reduce their bounce-back stats (in addition to keyword-stuffed content)

FeepingCreature(10000) 5 days ago [-]

That seems automatically testable. Load the site in a simulator, then look at the URL history.

ma2rten(10000) 5 days ago [-]

I don't think it necessarily shows that. Their good ranking could be completely unrelated to bounceback.

punnerud(10000) 5 days ago [-]

I live in Norway and don't have this problem now. I had a similar problem about a year ago on my MacBook Air because of some software that altered my Google results in all of my browsers. I don't remember the name of it, but something smelled fishy when the results were different from the ones on my phone.

oarth(10000) 5 days ago [-]

Pretty sure it affects you too, as it's the same for me on multiple networks, multiple user-agents, multiple devices and so on.

Simply try one of the examples, like 'hvordan regne ut prosent' (how to calculate percentages) or, I don't know... 'DNB aksje' (DNB stocks, DNB being the biggest bank in Norway). Sure enough, both rank on the first page or among the top results. (One is now using the www.nem-varmepumper.dk domain, which is the same thing.)

EDIT: Now the DNB one moved from 2nd and 3rd place to page 2. Things are moving around quickly.

Historical Discussions: New in Git: switch and restore (August 01, 2021: 817 points)

(852) New in Git: switch and restore

852 points 2 days ago by todsacerdoti in 10000th position

www.banterly.net | Estimated reading time – 5 minutes | comments | anchor

'When I see a door with a push sign, I pull first to avoid conflicts' - anonymous

For those who have worked with git for some time, it is not often that you get to discover new things about it. That is, if you exclude the plumbing commands, which most of us probably don't know by heart, and most likely that's for the better. To my surprise, I recently found out about 2 new additions to the list of high-level commands:

To understand why they came to be, let's first visit our old friend git checkout.

Checkout this

git checkout is one of the many reasons why newcomers find git confusing. And that is because its effect is context-dependent. The way most people use it is to switch the active branch in their local repo. More exactly, to switch the branch to which HEAD points. For example, you can switch to the develop branch if you are on the main branch:

git checkout develop

You can also make your HEAD pointer reference a specific commit instead of a branch (reaching the so-called detached HEAD state):

git checkout f8c540805b7e16753c65619ca3d7514178353f39

Where things get tricky is that if you provide a file as an argument instead of a branch or commit, it will discard your local changes to that file and restore it to the branch state. For example, if you checked out the develop branch and you made some changes to the test.txt file, then you can restore the file as it is in the latest commit of your branch with:

git checkout -- test.txt

A method to the madness

If you first look at these two behaviors you might think that it doesn't make any sense, why have one command do 2 different actions? Well, things are a little more subtle than that. If we look at the git documentation, we can see that the command has an extra argument that is usually omitted:

git checkout <tree-ish> -- <pathspec>

What is <tree-ish> ? It can mean a lot of different things, but most commonly it means a commit hash or a branch name. By default that is taken to be the current branch, but it can be any other branch or commit. So for example if you are in the develop branch and want to change the test.txt file to be the version from the main branch, you can do it like this:

git checkout main -- test.txt

With this in mind, maybe things start to make sense. When you provide just a branch or commit as an argument for git checkout, then it will change all your files to their state in the corresponding revision, but if you also specify a filename, it will only change the state of that file to match the specified revision.
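The two behaviors are easiest to see side by side in a throwaway repo. This is just an illustrative sketch (branch and file names are made up):

```shell
cd "$(mktemp -d)"                       # throwaway repo
git init -q
git checkout -qb main
git config user.email demo@example.com
git config user.name demo
echo 'main version' > test.txt
git add test.txt && git commit -qm 'initial'

git checkout -qb develop
echo 'develop version' > test.txt
git commit -qam 'develop change'

# Behavior 1: argument is a branch -> HEAD moves and all files follow
git checkout -q main
cat test.txt                            # prints: main version
git checkout -q develop

# Behavior 2: argument is a pathspec -> only that file changes, HEAD stays put
git checkout main -- test.txt
cat test.txt                            # prints: main version
git branch --show-current               # prints: develop
```

Note that after the second form you are still on develop, with test.txt staged as a modification taken from main.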

New kids on the block

So even if things may start to make sense after reading the previous paragraphs, we must admit that it is still confusing, especially for newcomers. That's why in version 2.23 of git, two new commands have been introduced to replace the old git checkout (git checkout is still available, but people new to git should preferably start with these). As you would expect, they basically each implement one of the two behaviors described previously, splitting git checkout in two.


git switch

This one implements the behavior of git checkout when running it only against a branch name. So you can use it to switch between branches or commits.

git switch develop

While with git checkout you can switch to a commit and transition into a detached HEAD state, by default git switch does not allow that. You need to provide the -d flag:

git switch -d f8c540805b7e16753c65619ca3d7514178353f39

Another difference is that with git checkout you can create and switch to the new branch in one command using the -b flag: git checkout -b new_branch

You can do the same with the new one, but the flag is -c: git switch -c new_branch


git restore

This one implements the behavior of git checkout when running it against a file. You can restore, as the name suggests, the state of a file to a specified git revision (the current branch by default).

git restore -- test.txt
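Beyond the default form, git restore also accepts --staged (to unstage a file) and --source (to restore a file from another revision, paralleling git checkout main -- test.txt). A small self-contained sketch in a throwaway repo:

```shell
cd "$(mktemp -d)"                         # throwaway repo
git init -q
git checkout -qb main
git config user.email demo@example.com
git config user.name demo
echo 'committed' > test.txt
git add test.txt && git commit -qm 'initial'

echo 'edited' > test.txt
git restore test.txt                      # discard the worktree change
cat test.txt                              # prints: committed

echo 'staged edit' > test.txt
git add test.txt
git restore --staged test.txt             # unstage, but keep the worktree edit
cat test.txt                              # prints: staged edit

git restore --source=HEAD test.txt        # restore the worktree file from a revision
cat test.txt                              # prints: committed
```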


These commands are still marked experimental, but for all intents and purposes they are here to stay, so by all means I encourage everyone to start using them. They will probably make a lot more sense in your head, and they make git just a little bit less confusing to new users.

More details about the two commands can be found in their git documentation:

All Comments: [-] | anchor

helltone(10000) 2 days ago [-]

What is a good resource to learn modern git properly? I have been using git for 10 years and get around with a really small set of old commands.

nerdponx(10000) 2 days ago [-]

Pro Git is excellent: https://git-scm.com/book/en/v2

quantumsequoia(10000) 2 days ago [-]

Nothing has been more intuitive than this interactive online tutorial


You learn more in 30 minutes than you do in hours of reading documentation and experimenting.

SiebenHeaven(10000) 2 days ago [-]

Ah yes, modern git for people using modern C++. If your plain old git works fine for you, there is no need to go looking for modern git, IMHO.

raju(10000) 2 days ago [-]

Agreed with the other recommendation of Pro Git.

If you want to learn Git from the inside out, I wrote a two-parter that aims to explain Git from the inside out, focusing on the data-structure Git uses:



Finally, _if_ you have an O'Reilly subscription, I am currently writing Head First Git (first four chapters are in early release). If you are not familiar with the Head First series, it's a rather unique format that involves using lots of pictures to explain ideas, and traditionally the books move a lot slower than most technical books. Ideas/concepts are cemented using puzzles, quizzes, crosswords.

You can see a list of the existing ones here https://www.amazon.com/Head-First-Series-Books/b?ie=UTF8&nod...

Feel free to email me if you need any more resources—I have spent a lot of time teaching Git.

brundolf(10000) 1 day ago [-]

As with all successful software that's been around for a decade or two, git is too darn complicated.

To adapt the adage about democracy: git is the worst VCS, except for all the other ones.

niutech(10000) 1 day ago [-]

Try Gitless or Fossil - they are less complicated.

pjmlp(10000) 2 days ago [-]

It is ironic that Linus hates C++ so much, and then proceeds to create what is for all practical purposes, the C++ of source control systems.

tonetheman(10000) 2 days ago [-]

This should be the top comment!

vnorilo(10000) 2 days ago [-]

Complexity is like body odor: you generally don't mind your own.

chris_j(10000) 2 days ago [-]

What would you recommend using instead of git?

(Myself, I've been heard complaining that git is overly complex, but the source code control systems that I used to use before git include Subversion, CVS and various Rational products and I have no desire to go back to any of them.)

ylyn(10000) 2 days ago [-]

Git is very simple. It's the UI that is a bit messy.

C++ is by no means simple.

hnarn(10000) 2 days ago [-]

To be fair, if you mean that these changes are "bloat" and that git should be kept "simple" (not easy but non-complex), I don't think this has much to do with Linus, because as far as I know he's no longer involved in the development of git.

necheffa(10000) 2 days ago [-]

I don't know, have you ever had to use Clear Case? git seems rather lean in comparison.

thesuperbigfrog(10000) 2 days ago [-]

He was 'scratching his own itch' (see http://www.catb.org/esr/writings/homesteading/cathedral-baza...).

Git was written to meet the version control requirements of the Linux kernel. It works well for that project's needs which are an outlier for most development needs unless you are working at FAANG scale.

dboreham(10000) 2 days ago [-]

The Fortran of sccs surely?

some_developer(10000) 2 days ago [-]

I used to know a lot of terminal commands but I'm seriously falling behind due to the JetBrains integration, which covers 99% of my daily use cases. Together with local history, I've never 'lost' work in years.

Never heard of switch/restore and will probably forget about them by the next time I'm on the terminal.

Rebase and interactive rebase are so well integrated for my use cases, and I feel much more productive without having to switch context.

Reflog and bisect are when I nowadays switch to the terminal; I had not discovered an equivalent, last time I checked.

600frogs(10000) 2 days ago [-]

I currently use a third-party git GUI (GitKraken atm) but I also use jetbrains products, am I missing out by not using the integrated git functionality? If so, is there a tutorial or guide you could recommend, or is it all fairly self-explanatory?

thecupisblue(10000) 2 days ago [-]

> but I'm seriously falling behind due to jetbrains integration

This IntelliJ integration is the source of quite a lot of git problems in teams I worked with.

I'm quite flabbergasted by this - devs claim to know git on their CV, come in and know what 'commit' is and how to use the IntelliJ UI, but don't even understand what it's doing. And everyone acts like it's OK, and learning git is a 'hard thing I'll never need', and we should all use Sourcetree or JetBrains. Or people just get so used to it and never understand what exists below it. They lose all sense of what they're doing and just think 'the machine knows what I want'. Then a vaguely worded dialog appears - or something similar - and they cause a clusterfuck on their branch, or sometimes even on other people's remote branches.

How do we allow our culture to be so lazy that people resist using one of the basic tools because 'oh its hard I gotta remember 5 commands' and we find it OK? No wonder the plane is burning.

It's good that you still know that reflog exists, because a lot of 'IntelliJ is my git client' users don't even know about it. Tho I'm still wondering: isn't it faster/easier to open the IntelliJ terminal and type a command or two than having to open a whole new window and click around in it? (also sorry if this sounds like an attack on you, it isn't! just really wondering!)

Also, re: OP: So basically 2 new commands were added that do what other commands already do, but people don't read the docs, so we should add new commands so maybe people will read the docs for them?

onionisafruit(10000) 2 days ago [-]

My conflict resolution skills have atrophied due to jetbrains. It's just so much easier than anything else I've tried. Even when I don't already have a project opened in idea, I'll open it just to resolve a merge conflict.

The rest I do on the command line because I don't want to forget — except commits because I'm not at risk of forgetting "git commit -m".

uses(10000) 2 days ago [-]

I use restore quite a lot but it kind of terrifies me that it can erase any amount of uncommitted work if I type something wrong. 'git restore .' is basically 'delete everything that I don't have a backup of'.

rochacon(10000) 1 day ago [-]

Add `-p` (`--patch`) and then pick from the diff prompt what parts you want to restore. You can also use this flag with commit.

lucb1e(10000) 2 days ago [-]

TL;DR because 'checkout' was found to be confusing, as of git 2.23 the switch command can switch to branches or commits (git switch master, git switch 0c38cf) and the restore command restores files (git restore pufferfish.txt).

cybersvenn(10000) 2 days ago [-]

Instead of one confusing command we now have three confusing commands.

hsn915(10000) 2 days ago [-]

I knew about `git switch` but this is the first time I hear about `git restore`.

It also never occurred to me that the checkout command was overloaded in that particular way.

thecupisblue(10000) 2 days ago [-]

Doesn't seem overloaded, seems quite fitting. Checkout a hash or a file from a hash. Switch kinda seems 'underloaded' in this way.

marcinzm(10000) 2 days ago [-]

Can someone tell me why they decided to have the new branch argument be `-c` in switch when it's `-b` in checkout?

Someone(10000) 2 days ago [-]

I would guess it is shorthand for --create. A quick google confirms that (https://git-scm.com/docs/git-switch)

foxpurple(10000) 2 days ago [-]

Because c for create makes perfect sense while b does not make sense on a command which only works on branches.

chrisan(10000) 2 days ago [-]

maybe they regret the original `-b` and thought it should be -c for create as well?

clon(10000) 2 days ago [-]

Another difference is that with git checkout you can create and switch to the new branch in one command using the -b flag:

  git checkout -b new_branch
You can do the same with the new one, but the flag is -c:

  git switch -c new_branch
A good example of why developers should not try to be designers, even when talking about API/CLI design.
kzrdude(10000) 2 days ago [-]

Developers/designers are always torn between the competing priorities of preserving consistency and making a new interface the best it can be.

yakubin(10000) 2 days ago [-]

Apple and GNOME are even better examples of why designers should not try to be designers. Now what options do we have left?

rnestler(10000) 2 days ago [-]

-b in checkout is short for 'branch' while -c in switch is short for 'create'.

IMO the UI of git switch is much more intuitive, since the argument is always a branch and the default behavior is to switch to an existing branch. For slightly different behavior (like creating the branch first) there are flags.

So I think it's good that the flag for switch is a different one than for checkout, since the interface of git checkout was quite unintuitive IMO.

ahmedfromtunis(10000) 2 days ago [-]

The new command makes more sense to me, i.e. `switch` and `-c(reate)` new_branch.

With `checkout`, however, what does `-b` even mean? Branch?

That said, it'll sure take time before the majority of developers (including myself) get onboard with it.

Would a deprecation flag be a good idea for git?

CRConrad(10000) about 13 hours ago [-]

The complaint has been, for ages, that checkout got that wrong. As designers, these developers are improving: They got it more right on the second try.

alkonaut(10000) 1 day ago [-]

> with git checkout you can create and switch to the new branch in one command using the -b flag: git checkout -b new_branch

> You can do the same with the new one, but the flag is -c: git switch -c new_branch

It's like they had a design meeting where they discussed this and said 'so I propose switch -b newbranch to create and switch to a new branch' and the objection was 'nah that would make it consistent with checkout, which is against the project policy'

afiori(10000) about 22 hours ago [-]

That is probably what happened, with the observation that being consistent with checkout is a terrible idea for everything, as checkout is a clusterfuck of a command

ReFruity(10000) 1 day ago [-]

Well, maybe it indicates specifically that they had no meeting :)

notatoad(10000) 1 day ago [-]

i think they had a design meeting where they said 'lets add some new commands to make git easier for newbies'.

carrying over flags whose abbreviated forms don't make any sense in the new context doesn't make anything easier for anybody. if you want to keep using the commands you have memorized, you can do that - just don't use switch or restore. the new commands are different, that's the point.

eevilspock(10000) 1 day ago [-]

The purpose of `git switch` is to switch branches, and its normal argument is a branch ref. Thus -b for 'branch' would be redundant and confusing. -c is for 'create', i.e. create the branch that i want to switch to.

The whole point of adding git switch and git restore is to come up with more user friendly porcelain. Otherwise we might as well stick with git checkout. git checkout is maximally consistent with git checkout!

Vinnl(10000) 2 days ago [-]

It's not really new anymore, but still way underused, so it could certainly do with more attention. Git's UI has become better, but they can't really remove the old UI and tutorials using those, so people keep sticking to that.

makeitdouble(10000) 2 days ago [-]

These new commands make a lot more sense, but the weird thing is they don't bring anything else to the table. They behave exactly like the existing ones, so much so that anyone who really cared could have just aliased them.

So is there any incentive to switch for the people who went through the trauma of burning the old ones in their soul ? (I often heard that knowing how it works internally makes git commands feel natural. I was lied to)

easygenes(10000) 2 days ago [-]

What would top recommendations be for tutorials using the newest UI?

Omin(10000) 2 days ago [-]

I can't reasonably start using such functionality until the PC with the oldest software that I still use has updated or I will have to deal with 2 ways of doing things all the time.

Currently, that's an ubuntu 18.04 machine at work and that doesn't have `git restore`, yet.

pcl(10000) 2 days ago [-]

It'd be interesting to add a config that disables the cruft for interactive terminals. I wonder if that could be a pathway towards deprecation.

agumonkey(10000) 2 days ago [-]

and it's a very general problem

distributing improvements after a large mass of old habits has spread around is something that needs to be fixed

bicolao(10000) 2 days ago [-]

You don't need -- as much in the 'git restore' example. With git checkout it may be necessary to separate the branch and the paths with '--', but since 'git restore' does not take a branch (except with -s), doing this is totally fine:

    git restore test.txt
411111111111111(10000) 2 days ago [-]

It's usually not necessary for checkout either

greatgib(10000) 2 days ago [-]

Thank you very much for this info.

From the article, I was thinking that it was again a stupidly confusing design for the CLI to require the -- even with a dedicated command.

One main issue with git is that it is not consistent and logical across commands: it always uses a different syntax or option abbreviation for different commands. For example, having a space or a slash between repo and branch in a command.

mcs_(10000) 2 days ago [-]


I've been using the alias `gk='git checkout'` for years now.

Git has been my first terminal-oriented versioning control system. I've used TFS and other but never focused on the action name, just the icons.

Checkout branch/hash/file seems more potent than two different commands.

cerved(10000) 2 days ago [-]

I feel obliged to note that the version control of TFS (Team Foundation Server) is TFVC (Team Foundation Version Control)

kzrdude(10000) 2 days ago [-]

It is unfortunate that:

+ git switch is documented as 'EXPERIMENTAL'

+ git --help lists git switch but not git checkout as an important command

This is a documentation inconsistency. It can't be both the canonical interface to use and experimental at the same time.

bobbyi_settv(10000) 2 days ago [-]

It's not just --help.

When you checkout a specific commit and are now in detached HEAD state, you are by default given the message

    If you want to create a new branch to retain commits you create, you may
    do so (now or later) by using -c with the switch command. Example:
      git switch -c <new-branch-name>
    Or undo this operation with:
      git switch -
the_biot(10000) 2 days ago [-]

So send in a patch.

juped(10000) 2 days ago [-]

They're several years old, but Duy Nguyen who invented them hasn't been around that much recently, which is why they're still marked 'experimental'.

bicolao(10000) 1 day ago [-]

Well, I'm him :) I actually left Git, and it looks like nobody has picked it up since. So it's going to be 'experimental' forever [1] until someone either starts doing something or deletes the whole thing.

[1] The experimental status is not because it's unstable but rather to allow us (or now, them) to change the UI design based on feedback if we got it wrong (again!).

geenat(10000) 1 day ago [-]

Stupid since git checkout does the same things.

Git user since 2012 here.

CRConrad(10000) about 13 hours ago [-]

Sorry, you were saying 'git checkout is stupid since it does the same things', right?

(Only since 2014 myself.)

aryamaan(10000) 2 days ago [-]

I picked up git restore with git's suggestion itself.

Whenever I do git status, it tells me which files are changed, and if I want to go back to their previous states, I can use git restore.

rnestler(10000) 2 days ago [-]

For some time I was pretty annoyed that git was showing the new suggestions, but for some reason my git autocomplete did not know about them and thus couldn't tab-complete them. (It was on ArchLinux with zsh using the grml zsh config.)

After a few months the autocomplete got updated as well and I could actually use the new interface without too much frustration.

Lordarminius(10000) 2 days ago [-]

As a helpful aside, in my experience, there are only about a dozen or so Git commands you need to do ninety percent of your work. You don't need to become a git zen master right away.

1. git init: start a new repository

2. git status: check your current state

3. git add -A: begin tracking files (stage all changes)

4. git commit -am: commit all changes to tracked files, with a message

5. git switch -c [branch name]: create a branch and switch to it (git checkout -b will do the same thing)

6. git switch [branch name]: switch between named branches

7. git merge [branch]: merge the named branch into the current branch

8. git branch [branch name] -D: force-delete a branch

9. git log --pretty=oneline: show commit history, one line per commit

10. git push: send your local commits to the remote

11. git clone [repo]: copy a project onto your local computer

The comments also contain some additional advice. Here is a good introductory video: https://www.youtube.com/watch?v=2sjqTHE0zok
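The whole list above can be exercised end to end in a throwaway repo. A rough sketch (file and branch names are made up):

```shell
# The everyday loop: init, stage, commit, branch, merge, inspect.
rm -rf /tmp/git-basics && mkdir /tmp/git-basics && cd /tmp/git-basics
git init -q                          # 1. start a repository
git config user.email demo@example.com && git config user.name demo
echo hello > notes.txt
git status --short                   # 2. shows the untracked file
git add -A                           # 3. begin tracking files
git commit -qm 'add notes'           # 4. commit with a message
git switch -c feature                # 5. create a branch and switch to it
echo more >> notes.txt
git commit -qam 'extend notes'
git switch -                         # 6. back to the previous branch
git merge -q feature                 # 7. merge it in (fast-forward here)
git branch -d feature                # 8. safe delete, now fully merged
git log --pretty=oneline             # 9. one line per commit
```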

y04nn(10000) 2 days ago [-]

I would add `git rebase -i`, because I usually develop on a local branch and rebase it onto the updated one. With git, things can get messy when you want to do something outside of the basic stuff. What I hate the most is resolving 3-way merges.

cryptonector(10000) 1 day ago [-]

You're missing out. You need rebase and cherry-pick. Merging sucks.

samatman(10000) 2 days ago [-]

I'm mildly amused that your set of commands can't actually commit anything other than a brand-new file!

I will admit that I'm a lazy git user, and do most of my commits with `git commit -a` rather than `-am`, since I try to give a short paragraph explaining the reasoning behind whatever the title message claims is the purpose of the commit.

I do run `git diff` first to see what I've changed, and if the diff has unrelated changes in different files I'll usually break it up into separate commits.

Decent introductory list, though. It won't surprise you that I think diff should be learned immediately. Whether you need rebase depends on the conventions of the codebase; if someone can learn it later they should, as it can get tricky.

TobTobXX(10000) 2 days ago [-]

> 8. git branch [branchname] -D : delete branch if not tracked

Dangerous advice there. Use the lower-case '-d' option. It'll tell you that you maybe don't want to delete the branch when it has unmerged commits.

If you really do want to delete the branch, the output of `git branch -d` tells you the force option.
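A quick sketch of that safety net in a throwaway repo (the branch name is made up):

```shell
# '-d' refuses to delete a branch with unmerged commits; '-D' forces it.
rm -rf /tmp/branch-demo && mkdir /tmp/branch-demo && cd /tmp/branch-demo
git init -q
git config user.email demo@example.com && git config user.name demo
echo a > f.txt && git add -A && git commit -qm init
git switch -q -c wip
echo b >> f.txt && git commit -qam 'unmerged work'
git switch -q -
git branch -d wip || echo 'refused: wip has unmerged commits'
git branch -D wip                    # force-delete; the work on wip is lost
```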

dfabulich(10000) 2 days ago [-]

Odd that you put switch on this list but not restore. Undoing changes is pretty important.

talkingtab(10000) 2 days ago [-]

git restore, where have you been all my life?

lucb1e(10000) 2 days ago [-]

Nonexistent for the most part unless you were born recently :)

hrishi(10000) 2 days ago [-]

I have taken git apart and learned it three times now, and it all makes sense to me. The commands however, never clicked. The terminology never felt intuitive, nor predictably applied.

So I can explain how git works in great detail, but ask me how to perform an action I haven't performed in a month, and it's a lot like figuring out a tar command.

ExtraE(10000) 1 day ago [-]

Is there space in the world for an all new (v2) set of porcelain commands? (IMO yes)

tkuraku(10000) 2 days ago [-]

I think Sublime Merge is pretty close to the optimal git GUI: https://www.sublimemerge.com/. It uses standard git commands and terminology and lets you add custom git commands to the GUI. Unlike Sourcetree, it is cross-platform for Windows/Linux/Mac and includes a merge tool. Command-line git and Sublime Merge translate back and forth pretty easily.

solarkraft(10000) 1 day ago [-]

I tried to use it and ended up back with Fork for some reason. I think basic tasks like merging were unnecessarily cumbersome and reliability wasn't perfect.

rattray(10000) 2 days ago [-]

For those wondering this is a separate application from Sublime Text, built by the same folks, and with the same free evaluation and $99 lifetime license (not sure how 'required' it is).

I haven't felt a need for a git gui but I might give this a try anyhow.

fermentation(10000) 2 days ago [-]

How do folks like this for reviews? I'm switching to git for a new job and the process is a bit overwhelming compared to mercurial

biryani_chicken(10000) 1 day ago [-]

I don't like Sublime Merge that much. Probably because I tried GitKraken first and loved it. It helped me understand stash and rebase better than the command line ever did. At some point they decided to not allow gratis use on private personal repos so I switched to VSCode with the Git Graph extension which gives me most of what I liked from GitKraken.

elpakal(10000) 2 days ago [-]

+1 on cli git and Sublime Merge. I use cli for everything except viewing diffs and I find Sublime to be great at that.

aetherspawn(10000) 2 days ago [-]

A lot of comments along the lines of 'why do people use <ide> instead of learning the commands'.

For me, I used to use terminal git, and I still do occasionally. But I use Sourcetree now for most things because I make fewer mistakes when I see the tree visually all the time.

My job isn't to use git, it's to write specialist software. If I get the software written and the customer is happy, it doesn't matter whether I use <ide> or not. Imagine having 100 complex things bouncing around your head and having to make that 101 when you forget the order of arguments to merge.

The guy who knows every command of git backwards is welcome to apply for a job managing a git repo or something if such a thing exists? But I could harp on the same way about his missing MATLAB or firmware skills.

PaulDavisThe1st(10000) 2 days ago [-]

> My job isn't to use git, it's to write specialist software. If I get the software written and the customer is happy, it doesn't matter whether I use <ide> or not. Imagine having 100 complex things bouncing around your head and having to make that 101 when you forget the order of arguments to merge.

Imagine if you knew a cabinet builder who said:

'My job isn't to use a table saw, it's to build beautiful cabinets. If I get the cabinets built and the customer is happy, it doesn't matter whether I use a japanese handsaw or a CNC-controlled laser. Imagine having 100 different pieces bouncing around in your head and having to then remember the assembly order.'

Now, you might argue that this supports your point, by claiming that it actually doesn't matter whether the carpenter uses a japanese handsaw, table saw or CNC laser cutter. But I'd argue the opposite: it does matter that the carpenter knows the tools they use as well as possible, because this affects not only the quality & speed of their work, but also the range of possibilities they can even consider. It doesn't matter much which tool they use, as long as they know it intimately and in depth.

I would argue that the same is true of the tools we use as software developers. Pretending that all of the skill lies only in the domain of creating actual lines of code is misleading. If you're using a revision control system, you owe it to yourself, your customers and your colleagues (if any) to be a master of that in the same way that you're a master of the specialist software you're creating.

u801e(10000) 1 day ago [-]

> My job isn't to use git, it's to write specialist software.

Part of the job is to know and understand the tools that you need to use in order to perform the duties of that job. Saying that it isn't your job to use git is like a surgeon saying it's not their job to learn how to tie sutures when closing up the surgical site after completing the operation.

benglish11(10000) 2 days ago [-]

For me, I've used visual git tools in the past and they ended up doing something unintended/unexpected. So now I only use the terminal commands. Perhaps the old git tools I've used have improved significantly though.

emsy(10000) 2 days ago [-]

I completely agree. It always baffles me how something as simple (yes, simple!) as version control can end up as the abomination that is git. After controlling for popularity, you don't see nearly as many posts explaining subversion in detail, and subversion being centralized is neither the only nor the biggest reason for that.

akamoonknight(10000) 2 days ago [-]

I will agree that visually seeing the tree is such a useful tool to have access to. I know that's not the true desire of your use case, but in case it's useful, I will add what is obviously the best git alias, 'git lg': https://coderwall.com/p/euwpig/a-better-git-log

git lg --all is probably my most used command in terminals and I think it gives me a better view of how projects are flowing on the whole.
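For reference, the alias behind 'git lg' looks roughly like this (quoting the linked post from memory, so double-check the exact format string there; it's set per-repo here to avoid touching global config):

```shell
# Define the 'git lg' alias locally in a throwaway repo and try it.
rm -rf /tmp/lg-demo && mkdir /tmp/lg-demo && cd /tmp/lg-demo
git init -q
git config user.email demo@example.com && git config user.name demo
git config alias.lg "log --color --graph --abbrev-commit \
  --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset'"
echo a > f && git add -A && git commit -qm first
echo b >> f && git commit -qam second
git lg --all                         # compact, colored, one '*' per commit
```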

alerighi(10000) 1 day ago [-]

You don't even need git at that point. I don't understand why you'd use git if you want a GUI... at that point, put the source code on the company fileserver and adopt 'zip versioning': when you do a new release, create an archive named 'project-X.Y.Z.zip' and archive it on the fileserver. If you need to work on another branch, copy the source code directory. Why bother?

I don't understand people who want to use git but want to do so with a GUI that abstracts away everything git was created to address, limiting themselves to writing some code, committing and pushing. You gain no benefit from using git this way; to me you are only wasting time.

If you choose to adopt git, you learn how to use it, and so you learn the commands (it's not that much effort). In my experience GUIs always create problems, especially when someone on a team uses a GUI that creates junk in the repository (like 100 useless merge commits created automatically for things that shouldn't really have been merges, which make git log unreadable...).

Also, when people who use GUIs hit a problem the GUI doesn't know how to solve (GUIs typically implement only the basics, and if something goes wrong they can't help you), they just delete the whole repo and clone it again, or worse, try to fix it by pressing random buttons in the GUI and put the repo in a shitty state, so a coworker who knows how to use git has to waste his time cleaning up the crap the fantastic git GUI made.

And I'm not saying you shouldn't use GUIs at all. I use the one in VSCode for simple things like creating commits and switching branches. For advanced features like merge, cherry-pick and rebase I use the CLI; I find it more practical.

snarfy(10000) 2 days ago [-]

We are all using git only because Linus wrote it. The cargo cult is real and very much alive in our industry, I think precisely because we are all here to write specialist software. Too busy in our domain to worry about version control nuances so we just go with what is popular and don't think about it too much. It's not just version control, it's libraries, frameworks, languages, all of it. If it's not popular it's doomed to failure.

oauea(10000) 2 days ago [-]

All visual git tools I've used suck and eventually wound up corrupting the repo. I've also noticed that all of my colleagues who learned git through these visual tools didn't actually learn git, and have no idea how to do anything other than add/commit/push.

I say 'just rebase your branch' and I can see the panic grow in their eyes.

intellix(10000) 2 days ago [-]

I wouldn't say I'm an expert, but I've got about 10 years of experience using git via the CLI, and whenever a noob doing something weird is using an IDE I'm like... sorry, I have zero idea what this is trying to do and cannot help you.

0xEFF(10000) 2 days ago [-]

I recently tried to help someone onboard into a cloud project that requires git tunneling due to security policies.

While they had experience with their IDE of choice and git, they were ultimately unable to push any changes.

OJFord(10000) 1 day ago [-]

> [I don't use git CLI so much any more] because I make less mistakes seeing the tree visually all the time.

I look at the tree visually so frequently it must be my most common command. In the CLI.

    log --graph --decorate --pretty=oneline --abbrev-commit
And then optionally adding whatever else, `--all` most frequently. (Obviously not writing it all out every time - with git config aliases, and actually `gitl` as a shell alias for that even. That's probably up there in my top.. 5? shell commands.)

GUIs work too of course. Just pointing out you don't have to abandon the CLI for a tree. There's fancier third-party tools than native git log that are still CLI even.

mattrighetti(10000) 1 day ago [-]

Using git from cli is like driving a Ferrari in a road with 30km/h speed limit

mberning(10000) 2 days ago [-]

Absolutely. I can't wait until something with better ux comes along and gets enough traction to make git a distant memory. I do not want to know the detailed inner workings of my VCS data model or 100 incongruent commands to make it work.

bob1029(10000) 2 days ago [-]

> My job isn't to use git, it's to write specialist software.

This is true on so many other levels too. My job isn't to be an AWS expert, VIM master, Visual Studio ninja, Unix professor, et al.

My job is to make the customer happy. That is it. If the customer is happy, my project managers are happy, the executives are happy, the investors are happy. When all of your bosses are happy, you can get away with absolute murder. No one gives you shit about anything. Production went down because you fucked up? No big deal - that was like the first time in 18 months we had any problems, and the customer can't even see these things through all the magical features they get to play with day-to-day. Need to take the entire afternoon to play overwatch because [arbitrary fuck you reason abc]? No one cares as long as you didn't have a scheduled meeting. In this realm, your mind is free to explore side projects without fear of reproach or guilt-trip. Tasks are executed with confidence and calm. Innovations are more frequent and valuable. People are actually relaxing in their time off and enjoy working for their employer.

When the customer is pissed off, it is like entering into Doom Eternal as a non-player character. At every turn you begin to anticipate a heated conversation about missed target XYZ and incident ABC. Each ding of your outlook bumps your blood pressure by 20-30% before you even see the subject line. Your executives start taking damage from your customer's executives. Investors begin executing difficult queries regarding long-term viability. NO one is sleeping anymore. Side projects? Are you fucking kidding me? Not in this hell.

So, when someone in my organization starts giving me the run-around about [pedantic greybeard doctrine which adds 10x overhead to a process], and also has no business value to show for said run-around, I begin to shut things down pretty quickly. If you want to play nuclear release authorization simulator every time you need to check in source code, please do this on your own time. Even the most elite hacker rockstars like to use GUI tools so they can see what the fuck is going on without making their eyes bleed every 10-15 minutes due to terminal character display restrictions.

jupp0r(10000) 2 days ago [-]

Having been through two Perforce -> git transitions of medium-sized repos, with a few dozen people contributing, and being the person with the most git knowledge in the group who gets called in when people new to git mess things up: these GUI git clients are OK if you know what you are doing and what the consequences of checking various checkboxes are. They are not conducive to learning how git works and how to use it to solve real-world problems. The command line is a great way to learn git, and that fundamental understanding can then be used to reverse-engineer what GUIs do under the hood.

tryingtogetback(10000) 2 days ago [-]

For me, as a dentist: I used to use the dental drill myself, but now I trust my janitor to handle it; this way I make fewer mistakes myself.

My job as a dentist isn't to use the dental drill, it's to fix teeth in general. If I manage to fix a tooth and the customer is happy, it doesn't matter whether I use the drill myself or the janitor does. Imagine having 100 complex things bouncing around your head and having to make that 101 when you forget the order of drill bits you need for a root canal.

The guy who knows dental drilling backwards is welcome to apply for a job managing dental drills or something if such a thing exists? But I could harp on the same way about his missing medical or braces-training skills.

Shorel(10000) 2 days ago [-]

A couple of easily added aliases in .gitconfig and my CLI can do everything your GUI can do, in a portable way, and much more easily and quickly IMO.

Nothing to learn or even forget, either, as the CLI is actually the easier part of the job.

So, you do you, nothing wrong with that, but the CLI is here to stay.

emodendroket(10000) 2 days ago [-]

I find the visual ones often harder to use but that's just me. Whatever works works.

MisterBastahrd(10000) 1 day ago [-]

Wonder how many command line enthusiasts are using gitflow to make things even simpler than sourcetree.

neop1x(10000) 1 day ago [-]

>> My job isn't to use git, it's to write specialist software.

It's like a plumber complaining that his job isn't driving a car, and that he wants customers to pick him up or wait for him until he arrives on foot or by public transport.

madeofpalk(10000) 2 days ago [-]

In the end, it comes down to a personal preference for what you're most comfortable with. Some will prefer GUIs, others will prefer the command line.

Personally, I really enjoy using both the command line and the GitHub app. The GitHub app is super simple and straightforward; it's great for just committing (parts of) files. Anything more than that and I prefer the command line for 'direct control'.

Arch-TK(10000) 1 day ago [-]

So, putting aside your argument: I completely disagree with it, but a lot of people have already voiced my concerns.

> The guy who knows every command of git backwards is welcome to apply for a job managing a git repo or something if such a thing exists?

Yes, this is the job of a maintainer in fact. They exist in a variety of organisations but maybe not enough. The best example is the linux kernel. Developers are expected to maintain their own local tree. When it comes to contributing code to the kernel, the patches are sent in a standardised manner to a mailing list and a maintainer then handles dealing with branches, rebases and merges. This means that developers don't need to know any more git than they really want to learn, aside from how to use git-format-patch and git-send-email which are really quite simple tools with an incredibly vast number of tutorials out there explaining them.

This means that people who insist that it's 'not their job' to learn git can achieve the requirements of 'patches which build at every step and contain isolated step by step changes' using a GUI or doing something really stupid like copying the code aside, deleting and re-cloning the repository and then pasting and committing each step. It also means that people who actually know how to use git can get the job done in a fraction of the time.

It also means that a carpenter^Wdeveloper's insistence to not learn how to use a claw hammer^W^W^Wgit will not affect their fellow coworkers/cocontributors.

nerfbatplz(10000) 2 days ago [-]

Except that 99% of day-to-day git tasks are done with like 7 commands:

git commit

git checkout

git merge

git pull

git push

git rebase

git stash

I can't remember needing a command that wasn't one of those, and I exclusively use the CLI.

agilob(10000) 2 days ago [-]

I expect other programmers to learn and master the tools they use 50 times per day. You should be getting more efficient and productive, and making fewer mistakes, with the languages, frameworks and tools you use daily. If you prefer Sourcetree, so be it, but I expect you to use it efficiently and not make mistakes other people wouldn't make with git, zsh and ohmyzsh (which contains hundreds of handy shortcuts).

magoon(10000) 2 days ago [-]

Commands are explicit and shareable. If you've mastered <ide> then good on you, but it makes you an island.

smusamashah(10000) 1 day ago [-]

If you understand how git works (its data structure, essentially), then it's far easier to do anything with an IDE/GUI instead of the CLI. They are more intuitive, shorten the work, and are less prone to mistakes.

gumby(10000) 2 days ago [-]

I don't completely agree.

> My job isn't to use git, it's to write specialist software.

I'm a big fan of automation, but there are certain fundamental tools I think one needs to understand to do the job. Both because you should have some idea what the automation is there to accomplish and also to get yourself out of a pickle when something goes wrong (or to even recognize when that happens!).

So, as I think most people would agree: you certainly don't need to understand the obscure corners of your programming language, but you should have a solid understanding of the fundamentals and a decent overview of the rest.

In the case of source control, and git in particular, IMHO you should have a decent fundamental understanding (which isn't even particularly complex at a conceptual level) so even if you don't remember the command for 'X' , you'll know to look for it when you do need it.

Given how you started your comment, perhaps you don't even agree with implication of the sentence I quoted.

(edit: added IMHO)

crazygringo(10000) 2 days ago [-]

Seriously, seeing the commit tree laid out with colored lines is essential to me. A glance at the interface lets me know exactly what state the repository is in. Just like you say, it's fewer mistakes. Which is precisely one of the benefits of good UX.

Going from SourceTree back to the command line would be a huge step backwards for me. I still use the command line sometimes because there's advanced stuff SourceTree can't do. But for most of my basic everyday operations, the command line is just inviting me to make little accidental mistakes every so often because the state of the repository and branches isn't obvious at a glance.

I only see upside to using an IDE, zero downside. (I've never had SourceTree 'corrupt' my repository, and all its commands do exactly what I expect -- it's just running the git commands I'd be typing out anyways.)

tkfu(10000) 2 days ago [-]

This reminds me of the old xkcd [1] about how standards proliferate...

Situation: Git has 137 difficult and unintuitive subcommands [2], and new users can't keep straight which ones they should use.

'Oh man, that's awful, let's add new subcommands that are clear, and do just one thing well!'

Soon: Situation: Git's CLI has 138 difficult and unintuitive subcommands.

[1] <https://xkcd.com/927/>

[2] Yup, seriously, as of 2.32. I checked.

5e92cb50239222b(10000) 2 days ago [-]

Most of those are plumbing. They're only needed if you're building tools on top of git (integration with IDEs or custom GUI, for example), or doing very advanced scripting/broken repository repair.

ninkendo(10000) 2 days ago [-]

Except this is actually the opposite... they turned one command that did too many things into two commands which each do fewer things.

nine_k(10000) 2 days ago [-]

Once upon a time, someone decided to overload `checkout` with a bunch of semi-related actions, apparently for convenience. The day these patches were accepted was a sad day.

It's great to see that someone took time to restore sanity. I'll switch to these commands now.

stepanhruda(10000) 2 days ago [-]

'switch' to these commands, hehe

layoutIfNeeded(10000) 2 days ago [-]

No thanks, I will keep using checkout.

leephillips(10000) 2 days ago [-]

Me too. The checkout command makes sense and seems consistent to me: check out branches or files. Why do I need two other different commands for this?

biryani_chicken(10000) 1 day ago [-]

I remember learning that `git checkout` checked out files from a specific branch, and that the default behavior was to check out all the files of the current branch. It made sense except for one thing: if I check out one file from another branch, HEAD still points to my current branch; if I check out all but one file from another branch, it's the same; but if I check out all files, then HEAD points to the other branch, and that seems inconsistent. I always thought there should be one command that switched branches, and then checkout would change the files.
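The asymmetry described above can be demonstrated in a throwaway repo (branch and file names are made up); a per-file checkout moves content, not HEAD:

```shell
# Checking out one file from another branch leaves HEAD in place;
# only switching branches moves it.
rm -rf /tmp/head-demo && mkdir /tmp/head-demo && cd /tmp/head-demo
git init -q -b main
git config user.email demo@example.com && git config user.name demo
echo v1 > a.txt && git add -A && git commit -qm init
git switch -q -c other
echo v2 > a.txt && git commit -qam change
git switch -q main
git checkout other -- a.txt          # file is now v2, HEAD still on main
git branch --show-current            # prints 'main'
git switch -q other                  # this is what actually moves HEAD
git branch --show-current            # prints 'other'
```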

CRConrad(10000) about 13 hours ago [-]

But in your terms, a branch is the files it contains. Or rather: changes gathered in a commit can be changes to several files. A branch is just a chain of commits, each based on the previous ones. So what a branch 'is' is a bunch of (bunches of) changes to one or more files. Therefore, 'checking out a branch' is checking out (a bunch of changes to) one or more files. And now you want to 'check out one file from another branch'... Is it really any wonder that doesn't make much sense?

I think you're doing yourself a disservice by even thinking in terms of 'checking out a file' as separate from checking out a branch. The units git deals in are commits and branches, not really individual files as such. If you want to use it, you'd better get used to thinking in the same units it does.

hkopp(10000) 2 days ago [-]

My git productivity hack is `git diff --color-words`. Instead of showing the line-by-line diff, it shows only the words that changed. Especially useful if you have long sentences where only a comma changed or some other typo. With git diff, the two lines are shown, with --color-words, only the changed symbol is highlighted. The option --color-words also works with git show. I even made aliases for them: git cshow and git cdiff.

Other than that, I recommend that people learn to use git properly. In my work, I often have problems with people overwriting their commits, and with handling merge requests where one commit message is 'did some updates' and the other is 'some fixes'. Getting to know git for an hour would have prevented both issues. But I am biased, since I have used git since my bachelor thesis.
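The aliases mentioned above might be set up like this (alias names from the comment; set locally here rather than with --global, and the sample file is made up):

```shell
# Word-level diffs: --color-words highlights only the changed tokens.
rm -rf /tmp/words-demo && mkdir /tmp/words-demo && cd /tmp/words-demo
git init -q
git config user.email demo@example.com && git config user.name demo
git config alias.cdiff 'diff --color-words'
git config alias.cshow 'show --color-words'
echo 'a long sentence, unchanged otherwise' > s.txt
git add -A && git commit -qm init
echo 'a long sentence unchanged otherwise' > s.txt   # only the comma changed
git cdiff                            # highlights just the changed word
```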

levzettelin(10000) 2 days ago [-]

Looking at diffs on the command line is cute and all, but for anything substantial I doubt it will ever be as good as a proper GUI. I like 'meld'. You have to install it, then run

  git config --global alias.meld '!git difftool -t meld --dir-diff'
and after that you can do:

  git meld                   # like 'git diff'
  git meld --staged          # like 'git diff --staged'
  git meld branchA branchB   # like 'git diff branchA branchB'
cush(10000) 2 days ago [-]

> properly

Or maybe just don't shame people for not working the way you do.

mtekman(10000) 2 days ago [-]

In a similar manner for Emacs users: magit-toggle-refine-hunk is a lifesaver for word diffs

NilsIRL(10000) 2 days ago [-]

I also like to use `git diff --color-words=.` to do a character wise diff instead.

leephillips(10000) 2 days ago [-]

Thanks for teaching me about --color-words. I didn't know about it, and I greatly prefer it over the normal git diff output.

lisper(10000) 2 days ago [-]

> Other than that, I recommend that people learn to use git properly.

Sorry to be harsh here, but that is completely useless advice. It's a tautology. Of course people should learn to use git 'properly'. What's the alternative, that they should learn to use it improperly? Everyone should learn to use everything properly. It's like telling someone dealing with a crisis that they should 'take appropriate action', as if taking inappropriate action was something that someone would actually seriously consider absent this advice.

The problem is that no one knows what 'properly' means when it comes to git. Git itself provides no clue, and everyone and their second cousin has an opinion. That makes the advice to use git 'properly' utterly vacuous. Figuring out what 'properly' means is the whole problem with git.

[UPDATE] See also my earlier comment here:


imiric(10000) 2 days ago [-]

I highly suggest delta[1] for viewing diffs on the command line. It pretty much replicates GitHub's diff rendering, and is quite configurable.

[1]: https://github.com/dandavison/delta

rubyn00bie(10000) 2 days ago [-]

I'm surprised there's no reference to:

git switch -

For going back to the previous branch you checked out.

hsn915(10000) 2 days ago [-]

This is not new. It works the same with the checkout command.

    git checkout -
akx(10000) 2 days ago [-]

These features were introduced in 2019 (and have been featured in git's help texts since).

Serious question: why are they being called new two years later? Is it that no one cares to read git's prompts or release notes?

erik_seaberg(10000) 1 day ago [-]

It might take five years for a change like this to show up on every box you log into, depending on how much your org relies on LTS releases to avoid randomly breaking stuff.

jonwinstanley(10000) 2 days ago [-]

The author did say they had only recently discovered these commands. Unsure why they said the commands were new; maybe they were kidding?

Historical Discussions: Cloudflare's inaccessible browser contradicts the company's mission (July 30, 2021: 739 points)

(748) Cloudflare's inaccessible browser contradicts the company's mission

748 points 4 days ago by mwcampbell in 10000th position

mwcampbell.github.io | Estimated reading time – 4 minutes | comments | anchor

Cloudflare's inaccessible browser contradicts the company's mission

An open letter to Matthew Prince, CEO, Cloudflare

by Matt Campbell July 30, 2021

Mr. Prince:

About four months ago, Cloudflare launched Browser Isolation during Security Week, without ensuring that the product is accessible to blind people using screen readers. Now, four months later, this problem is still not solved. Cloudflare's neglect of accessibility in this product raises a new barrier to employment for blind people as companies adopt this product to improve the security of their networks.

As I'm sure you recall, I first contacted you about this problem a year and a half ago, when Cloudflare first pre-announced the development of Browser Isolation, because I knew based on the technical description in that blog post that this product would be completely inaccessible with a screen reader unless the team specifically took steps to make it accessible. I even offered deep technical advice on how to make this product fully accessible, based on my expertise in accessibility and many years of experience as a developer of assistive technology. You were quick to respond back then, assuring me that the team was taking accessibility into account. In particular, you wrote the following, referring to your company's mission: "Agree with you that it's critical we not take a step backward as we're working toward building a better Internet."

Now, as your Impact Week draws to a close, I'm writing to remind you of what you told me last year. Throughout this week, Cloudflare has promoted many activities that contribute to its mission. There was even a blog post about Flarability, the employee resource group for disabled Cloudflare employees. But despite all of that, Cloudflare's neglect of accessibility in the Browser Isolation product stands as a contradiction of the company's mission.

I mentioned above that the inaccessibility of Browser Isolation raises a new barrier to employment for blind people. This isn't an exaggeration. A blind acquaintance of mine once lost his job because of a newly added requirement that he use an inaccessible application. Now, imagine that a company's IT department has mandated the use of Browser Isolation for the sake of security. Any blind employees or contractors at that company would immediately be unable to browse all websites that haven't been specifically exempted from this security measure. Sure, the company might choose to make an exception for a small number of people. But now suppose that a blind person is interviewing, and they're well-qualified for the job, but the company has already rolled out Browser Isolation. It's not too much of a stretch to suppose that the company would simply decide to pass on that candidate. Of course, it's not Cloudflare's fault if some of its customers wrongly discriminate against employees or candidates. But Cloudflare surely has a responsibility to make its products as accessible to as many users as possible, given your mission and the potentially high stakes of neglecting accessibility in this product in particular.

In light of all of that, I'm calling on you to immediately prioritize the necessary work to make Cloudflare's Browser Isolation product accessible to blind people. I believe I have been patient; remember, I first contacted you about this roughly 18 months ago. But in the past 4 months in particular, despite my repeated emails to the product manager for Browser Isolation, my requests for updates on this problem have been ignored. That's why I'm writing this as an open letter. It's past time for the Browser Isolation product team to live up to the mission of Cloudflare by making this product fully accessible. I look forward to your prompt reply.

Sincerely, Matt Campbell

All Comments: [-] | anchor

fouc(10000) 1 day ago [-]

While we're on the topic, I have a question:

Are there large companies that have deliberately made their products less accessible to those with disabilities, because they're completely hostile to scrapers (and the open web)?

Is it even possible to make a completely closed/hard-to-scrape web app that is still 100% accessible to the blind?

mwcampbell(10000) 1 day ago [-]

> Are there large companies that have deliberately made their products less accessible to those with disabilities, because they're completely hostile to scrapers (and the open web)?

I'm not aware of any, but that doesn't mean they're not out there, in some specialized niche perhaps.

> Is it even possible to make a completely closed/hard-to-scrape web app that is still 100% accessible to the blind?

I would guess not. The headless browsers that scrapers are already using could be extended to expose their accessibility tree to scripts as well as the DOM.

lbriner(10000) 1 day ago [-]

Sad but typical and not just from big 'evil' companies (not suggesting that CF is!)

I just ran Jekyll to migrate my Blogger blog to self-hosted and with the default importer and default theme, I clicked the Web Accessibility button and immediately got some several hundred contrast errors (lots of blog post links) and some incorrect heading levels. Just basics but people are too unaware of accessibility requirements that this even happens before a release.

What is missing? Is there not an online checker like w3c does for markup or acid does for browser tests? Oh yes, it is here: https://wave.webaim.org/ and there is also a browser plugin so no real excuses.

arp242(10000) 1 day ago [-]

I don't know what you did exactly, but the default Jekyll theme is fairly simple black-on-white and doesn't seem to have any major issues from quick spot-check.

I think it may be an issue with your import(?)

ipv6ipv4(10000) 1 day ago [-]

Accessibility is important. Four months is not a lot of time for any large scale software project with a large team. Every little feature takes time. Why is it assumed that Cloudflare is not working on accessibility?

mwcampbell(10000) 1 day ago [-]

OK, I shouldn't have led with that number.

If I had first told them about the problem 4 months ago, then of course it would be too soon to expect a solution. But as I wrote further down in the OP, I first contacted them about this roughly 18 months ago. As far as I can tell, they haven't done anything about it in all that time. And though I was previously in contact with the product manager, it has been 4 months since the last time he wrote to me, despite my recurring requests for an update.

Also to be clear, I didn't choose to escalate this now because it has been 4 months. The trigger was Cloudflare's 'Impact Week', which included a blog post about their Flarability employee resource group.

haser_au(10000) 1 day ago [-]

The open letter states the author contacted Matthew Prince (CEO, Cloudflare) 18 months ago, and received a response;

'...assuring me that the team was taking accessibility into account. In particular, you wrote the following, referring to your company's mission: "Agree with you that it's critical we not take a step backward as we're working toward building a better Internet."

eastdakota(10000) 1 day ago [-]

This has been prioritized since long before Matt emailed me. It was specifically flagged during our diligence process of S2 Systems, the company we acquired for the Remote Browser Isolation (RBI) technology. It has been an engineering project that I have personally followed since we acquired S2 nearly two years ago.

Unfortunately, this has proved a non-trivial problem to solve, in spite of significant engineering resources dedicated to it, and we don't yet have an acceptable solution. But I'm confident we're on the right track.

The challenge is that the process of rendering content inert to local security threats also makes it incompatible with current screen reader technology. Matt has helpfully suggested some ideas which are in line with what we have been working on, but the diversity of the web makes the solution very complex in practice. While I appreciate his suggestion in this thread that if we would just hire him this could be fixed in a few months, I think he would acknowledge upon reflection that is flippant.

How the web is rendered and the diversity of web pages, especially dynamically updated pages, makes many solutions that seem obvious not tenable. We need to validate the solution we deliver will work across all the complexities of the web and across a broad range of accessibility devices while, at the same time, not introducing new threats. We already have a great team doing this work. RBI is still a new product for us, and it's only been recently that we've gotten the core technology to work to a level that's acceptable, but I'm confident with the work we're doing we will be the first RBI technology in the market with broad accessibility support.

In the meantime, we provide our customers a way to bypass the RBI technology to accommodate their visually impaired employees. In these cases, we recommend that additional safeguards be put in place for these employees' machines to guard against potential security compromise. This isn't a perfect solution, but it does help significantly reduce the surface area of attack while allowing visually impaired employees to do their jobs.

I hope that others in the space with similar technologies — including Mighty, Menlo Security, zScaler, and others — will also dedicate the resources needed to make their products as accessible as possible. Matt is right to call on the industry to prioritize the needs of visually impaired users. As we solve these challenging problems ourselves, we will share what we've learned, how we overcame challenges, and we will not do anything to restrict the intellectual property behind the solutions so the entire industry can benefit.

As for the rest of the discussion in this thread, I agree that Cloudflare is fundamentally in the trust business. It takes 5 minutes to sign up for Cloudflare, but only seconds to leave. We need to earn the trust of our customers, as well as Internet users in general, on a daily basis or we won't have a business. Appreciate everyone holding us accountable to that.

mwcampbell(10000) 1 day ago [-]

Thank you for your response, on the weekend no less. However:

> if we would just hire him this could be fixed in a few months

That's not quite what I said. Here's what I actually wrote:


> For the specific project of making this remote browser accessible, my wild guess is that if Cloudflare were to hire me to work on the project (no, not available at the moment), it could easily take a few months, but probably not more than a year. They could probably cut down that time if they hired away someone from the Chrome or Edge team who's actually an expert on Chromium accessibility specifically; I admit my main expertise is in Windows accessibility.

And of course it's possible that even what I wrote there is too optimistic.

I'm sorry I was unclear. What I meant was that I could see the project easily taking at least a few months, and maybe up to a year, but likely not more than a year.

Also, the intent of that comment was to give my answer to a question about how big a project this would be, not to suggest that Cloudflare should 'just' hire me. I even suggested that I wouldn't necessarily be the best person for the job.

mwcampbell(10000) 1 day ago [-]

My other reply, posted under your other copy of your response: https://news.ycombinator.com/edit?id=28032491

miki123211(10000) 1 day ago [-]

This problem unfortunately applies to a lot of remote access software, particularly when the web browser is the client.

I know of one company that switched to Web VNC for accessing a specific piece of software. They had a lot of offices and the software was expensive (paid per machine). This way, they could switch to a much smaller number of licenses, letting any employee connect from anywhere and wait in line if necessary. A blind person has lost a job over this.

digitallyfree(10000) 1 day ago [-]

I'm not sure if remote access programs (web browser or not) even support screen readers on the client, especially since many of those render the entire desktop server-side and send it back to the client as an image or video. A possible option may be to run the screen reader on the remote desktop itself if that's possible.

yjftsjthsd-h(10000) 1 day ago [-]

> A blind person has lost a job over this.

IANAL, but in at least the US and Europe that sounds like the easiest lawsuit of their life

sokoloff(10000) 2 days ago [-]

> A blind acquaintance of mine once lost his job because of a newly added requirement that he use an inaccessible application.

I find it hard to believe this happened as stated in the US, where any number of lawyers would be eager to take such an open-and-shut ADA violation case.

hobs(10000) 1 day ago [-]

There are constant and flagrant ADA violations - while the lobbying group is not weak, the war of attrition is definitely with the employers, not the ADA; I have seen so many violations it makes my head spin.

jnovek(10000) 1 day ago [-]

I have been fired because an employer was unwilling to negotiate reasonable accommodation per the ADA. Tech is insular. I chose to make zero noise for the sake of the future of my career and I wouldn't be surprised if most do the same.

WORMS_EAT_WORMS(10000) 2 days ago [-]

No doubt it could happen but I agree with you. This entire post is very odd and makes absolutely no sense at all.

mwcampbell(10000) 2 days ago [-]

Here are the two (edit: three) public blog posts I could find from this guy. I'll let you decide whether I misrepresented what happened.



Edit: Found the original announcement: https://blindaccessjournal.com/2006/02/my-job-lost-due-to-in...

And yes, it was in 2006. And as it happens, his employer rehired him shortly after, but only because they found something else for him to do. I believe my point still stands; for a short time, he lost his job, without knowing what happened next, and he went through the emotions associated with that.

brudgers(10000) 2 days ago [-]

ADA is Federal Law. It provides no damages. No attorney fees. The USDOJ is the plaintiff. Fines are imposed.

California Law is different in that it is like other civil laws with damages and attorney fees.

Consequently, cases from California make attention-commanding headlines. Elsewhere in the US, citizens must beseech the USDOJ to act on their behalf...it usually doesn't.

kwdc(10000) 1 day ago [-]

People leave jobs for all kinds of abuse and never take legal action. Other people create legal but unethical workplaces. Still others create just plain nasty work environments. The legal system isn't that great for sorting these messes out, and plenty of people know it.

Animats(10000) 1 day ago [-]

We're probably headed for a world in which everything is rendered to an image server-side. The HTML/CSS/Javascript mess has become so bloated and attack-ridden that sending images needs less bandwidth and is simpler.

rossmohax(10000) 1 day ago [-]

Reinventing X Server protocol?

Jaxkr(10000) 1 day ago [-]

God I hope you're wrong.

mwcampbell(10000) 1 day ago [-]

That wouldn't be so bad if the server sent down a tree of semantic UI elements, a.k.a. an accessibility tree, along with that image. That's basically what I advised Cloudflare to do ~18 months ago.
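
To make the idea concrete, here is a toy Python sketch of what a per-frame payload pairing an image with a semantic tree might look like. The field names and structure here are invented for illustration only; they are not Cloudflare's protocol or any real RBI wire format.

```python
import json

# Hypothetical sketch: alongside each rendered frame, the server ships a
# serialized accessibility tree -- semantic roles, accessible names, and the
# screen rectangles they map to -- so a local screen reader has something to
# traverse even though no HTML/CSS/JS reaches the client.

def make_node(role, name, rect, children=None):
    """Build one node of the illustrative accessibility tree."""
    return {
        "role": role,          # e.g. "button", "heading", "textbox"
        "name": name,          # accessible name the screen reader announces
        "rect": rect,          # [x, y, width, height] in frame pixels
        "children": children or [],
    }

# A tiny example page: a heading and a search form, positioned in the frame.
tree = make_node("document", "Example page", [0, 0, 1280, 720], [
    make_node("heading", "Welcome", [40, 20, 300, 32]),
    make_node("textbox", "Search", [40, 80, 400, 28]),
    make_node("button", "Go", [450, 80, 60, 28]),
])

payload = json.dumps(tree)

# The client would deserialize this and expose it to the platform
# accessibility API; activations are translated back into coordinates
# within each node's rect and sent to the remote browser as clicks.
decoded = json.loads(payload)
print(decoded["children"][2]["name"])  # -> Go
```

The interesting design question is the one raised in the thread: every semantic field added to this payload is information leaving the isolated browser, so the tree's richness trades directly against the isolation model's attack surface.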

5faulker(10000) 1 day ago [-]

Interesting. For images with few colors, manually optimized PNG can work better than WebP.

cxr(10000) 1 day ago [-]

> Their "client" was basically a fancy, highly specialized graphics terminal; all the real work was done on the server. For example, when you issued a command to an object, instead of sending a command message to the object on the server, the client would send the X-Y coordinates of your mouse click. The server would then render its own copy of the scene into an internal buffer to figure out what object you had clicked on.


sneak(10000) 2 days ago [-]

This makes logical sense. Smaller companies have fewer innovation tokens; large organizations like Cloudflare carry heavier burdens when releasing new products (i18n and a11y primarily among them).

devoutsalsa(10000) 2 days ago [-]

It seems like Cloudflare could embrace accessibility and use that in marketing as a competitive advantage.

nonbirithm(10000) 2 days ago [-]

Anecdotally, even with websites like Twitter that obfuscate their CSS class names to prevent the use of selective adblock, they still leave the readable ARIA strings in predictable places, allowing uBlock Origin users to create blacklist rules matching them. I'm wondering if those two features are at odds.
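
For reference, uBlock Origin's cosmetic filters accept standard CSS attribute selectors, which is what makes ARIA attributes usable as stable hooks even when class names are mangled. The `aria-label` values below are made-up examples, not verified against Twitter's current markup:

```
! Illustrative uBlock Origin cosmetic filters (hypothetical label values)
twitter.com##div[aria-label="Timeline: Trending now"]
twitter.com##div[aria-label^="Promoted"]
```

Because the ARIA strings have to stay human-meaningful for screen readers to work, obfuscating them would break accessibility, which is exactly the tension the comment above points at.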

novok(10000) 1 day ago [-]

I've found out you can do adblocking by matching text within tag types. I use it to block the email nag from Reddit.

wolfgang42(10000) 2 days ago [-]

Do we know that Twitter is intentionally doing that to defeat adblockers? It's a common speculation I see about them (and maybe it's a convenient side-effect), but these sorts of mangled class names are also a common feature of popular CSS-in-JS libraries. (I work on an internal app that does the same thing, and it's incredibly annoying but definitely not explicitly intended to be hostile.)

madjam002(10000) 2 days ago [-]

Twitter uses react-native-web which generates random class names, they're not doing it to evade ad blockers.

MattGaiser(10000) 1 day ago [-]

For people who have worked on accessibility related stuff in production projects, how much more expensive is it vs just ignoring it?

SimianLogic2(10000) 1 day ago [-]

We usually quote a 50-100% increase for agency web dev stuff (mostly marketing sites), and I'd say we've underestimated a few times. For basic HTML layouts it's not too bad, but the minute you move away from something that looks like Craigslist or Wikipedia, stuff starts to get hard. We've used 3rd-party consultants to do reviews, and every reviewer picks out different problems on the same exact site. I've implemented consultant recommendations line for line only to have that code flagged by a different reviewer at the same company as non-compliant.

BoorishBears(10000) 1 day ago [-]

Does it matter? Tomorrow morning you can wake up needing those accessibility features.

grishka(10000) 1 day ago [-]

I did screenreader support in a rather popular Android app. It took me several days to get from 'can't focus anything at all on the main screen' to 'all icon buttons are labeled and most of the functionality is usable, including the many very complex custom views with clickable elements inside'.

daviddever23box(10000) 2 days ago [-]

Why not push the screen reader component upstream?

It'd be another service add-on, but it might also be useful for folks who want to have narrative browsing, e.g., the equivalent of someone reading the news sites to the listener without having to interact with the site itself.

marcinzm(10000) 1 day ago [-]

A screen reader is a two-way device since it needs to expose ways to INTERACT with the site and not just read it. I assume there are many different settings for screen readers, including voices, speed, ways of interacting with site elements (click, voice command, shortcuts, etc.), etc. It'd be like forcing you to use IE 6 to browse the modern web and then firing you if you're not as efficient as someone on modern Chrome.

mwcampbell(10000) 2 days ago [-]

> Why not push the screen reader component upstream?

Are you suggesting that a screen reader should run on the same remote machine as the remote browser and push its audio down to the client? Or something else?

devwastaken(10000) 2 days ago [-]

Public services, even online, which are not accessible to those with major disabilities, are a violation of the ADA. https://youtu.be/IQjUCqVo4II

This may apply in other ways to Cloudflare, and if so, fines must be issued. It's 2021; there's no excuse for it other than not wanting to put in the work.

ceejayoz(10000) 2 days ago [-]

The fines would apply to the companies using CloudFlare, wouldn't they?

ggreer(10000) 1 day ago [-]

By that logic, isn't every screen sharing app violating the ADA? A screen reader can't read the text on someone else's screen in Zoom, Webex, Slack, etc. Zoom even admits to this in their accessibility FAQ and encourages speakers to supplement with notes.[1]

1. https://zoom.us/accessibility/faq#faq11

gnicholas(10000) 2 days ago [-]

They wouldn't be the first. An SVP of a major SV company once told me '[my company] doesn't give a shit about accessibility, and no one in Silicon Valley does.' When I went to the CSUN accessibility conference that year, guess which company's logo was emblazoned across the lanyards? Yup, their marketing department was happy to write checks that their company had no intention of cashing.

Silicon Valley is famous for its 'patina of accessibility': https://medium.com/@nicklum/silicon-valleys-patina-of-access...

mwcampbell(10000) 2 days ago [-]

I understand and can relate to the feeling that nobody gives a shit. And it may be true that the leadership of all of these companies only care about the bottom line. But let's not make things look worse than they are. Whatever the motive, some SV companies are doing good work in accessibility. The most obvious example is Apple; the introduction of VoiceOver on the iPhone in 2009 was groundbreaking and has been tremendously useful to blind people all over the world. Microsoft (disclosure: my former employer) is also doing good work on accessibility, e.g. its Seeing AI app. Of course, we have constructive criticism for these companies as well, but the state of accessibility in mainstream tech is not all bad.

akagusu(10000) 1 day ago [-]

Why are people still using and promoting Cloudflare when the company is repeatedly trying to position itself as an internet gatekeeper?

There is already a consensus that internet gatekeeping is bad for people, so why are people volunteering for this?

This company already has tremendous control over what people can or cannot see on the internet, since a lot of websites use it as a CDN, but there should be a limit on what companies can and cannot do.

In this particular case, we have blind people blocked from the internet, and it doesn't matter whether this is on purpose or just a side effect, because in practice they are being blocked, and yet something like this is unable to make a scratch on its reputation.

handrous(10000) 1 day ago [-]

Their 'self serve' tiers, and especially the $200 one (the only one with an SLA at all) are really, really good value, is why. Depending on your needs, their enterprise offerings are, too. And boycotts are, broadly, not effective enough to justify any personal risk/harm/expense at all.

wombarly(10000) 1 day ago [-]

Because without CloudFlare we would: Pay thousands in bandwidth costs per month; Double or triple our servers to handle peaks (they cache and serve the HTML for us); Be down constantly because of DDOS attacks.

SimeVidas(10000) 1 day ago [-]

> Why people are still using and promoting Cloudflare

I use Cloudflare because it hosts my website for free.

vorpalhex(10000) 1 day ago [-]

I don't think Cloudflare is intentionally trying to gatekeep the internet. At the same time the road to hell is paved with good intentions.

Their CDN service has allowed a lot more sites to exist than the two it has harmed (and I don't consider those two to be great losses).

However they are certainly becoming an internet chokepoint and we need more alternatives to them for the good of the internet.

uluyol(10000) 1 day ago [-]

Wow geez. There's been a lot of BS being thrown around about Cloudflare. I don't work for them, but I have been following the company for years.

On Cloudflare being a gatekeeper: yes, if you care about load, cost, and attacks, you need one. Cloudflare offers real value to their customers by providing these services. Will that lead to Cloudflare controlling the web? Well, you've got a number of direct competitors (Akamai and Fastly, to name two) in addition to the CDN offerings provided by cloud providers. Cloudflare isn't the first CDN, isn't the largest, and won't have a monopoly on being an internet middleman. Compare Cloudflare's network (https://www.cloudflare.com/network/) to Google's (https://peering.google.com/#/infrastructure).

On the necessity of gatekeepers on the internet: this is the way the internet works. You are responsible for peering with the rest of the world at a physical location and dealing with the traffic that comes your way. If you want to be close to your users (to avoid bandwidth bottlenecks and provide lower latency), you need to install equipment all over the place and peer with other networks. If you want to deal with bad traffic, you need the capacity and software to handle/filter it. You can always build your own CDN if you want, but the only way to deal with these issues is a CDN. Maybe if the internet worked differently things would be different. But that would be a huge change, especially since someone has to foot the cost of building these services. I guess you could somehow distribute the cost (though I don't see how), but you'd also have to somehow deal with the management and development of said infra, and I have no idea how such a thing would work without a central entity being responsible.

Anyway sorry for the rant.

pxue(10000) 1 day ago [-]

Because the pendulum is swinging towards ease of creation over control.

I can spin up a simple web app or a simple cloud function and get it globally distributed in minutes, for free. That's amazing.

johnnyApplePRNG(10000) 1 day ago [-]

Cloudflare is indispensable for a number of businesses, crypto exchanges especially off the top of my head.

MattGaiser(10000) 1 day ago [-]

People don't want the Internet gate kept. They do want their sites protected though.

manigandham(10000) 1 day ago [-]

1) Blind people are not 'blocked from the internet'. This is an accessibility issue with one of their security products. It's no different than an employer using other security measures that might limit usage for certain people, but it's the employer who makes the ultimate decision.

2) The reason people keep using Cloudflare is because it has the best product suite and pricing. There are competitors but none have approached the same features or (ironically) accessibility as CF.

3) Mission statements are nothing more than politics and PR. People put entirely too much faith in corporations and their associated mottos as if they're divine principles to live by. It's up to users to make their own rational decisions by weighing the risks, and in that regard, Cloudflare has actively helped fight censorship by helping improve connectivity and access to software, information, and privacy.

vbezhenar(10000) 1 day ago [-]

I like Cloudflare, because it provides some very essential services with free tiers. It is big enough, so I can trust them. I can be sure that they won't inject ads into my HTML pages. I can be sure that their DNS will not replace NXDOMAIN with fake ad responses. I can be sure that they won't log my VPN traffic trying to extract passwords or something like that.

For sure I don't support their decision to ban blind users and hope to see that resolved. But that's not enough to change my mind, not even remotely.

userbinator(10000) 1 day ago [-]

More and more, 'because security' has become the go-to reason, almost a thought-terminating cliche, for destroying freedom and privacy. It's really disturbing to see.

manquer(10000) 1 day ago [-]

Is there a case for ML-based advanced screen readers which do not need assistance from the application?

The problem seems fairly tractable. Reading what is on a display screen is easier than most computer vision problem spaces. There are many repetitive patterns in typical application UX.

For example, say there is a Save icon that is an image (a floppy disk in most apps) and not alt-tagged. By visually reading the image of the screen, the model should not have too much difficulty tagging it as a Save button?

Most consumer/biz app UX does follow many standard conventions, if only out of convenience and lack of imagination, so building a learning algorithm around these components should be possible?
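
The icon-tagging idea can be sketched with a toy exact-match version, leaving the actual machine learning aside. Everything here (the 3x3 "icon" bitmaps, the tiny "screenshot") is invented to illustrate the principle; a real system would use a trained vision model rather than exact pixel matching, precisely so it could handle the many visual variations of the same glyph.

```python
# Toy sketch: recognize an unlabeled icon by matching its pixel pattern
# against a small library of known icons. 1 = dark pixel, 0 = light pixel.

KNOWN_ICONS = {
    "Save": ((1, 1, 1),
             (1, 0, 1),
             (1, 1, 1)),   # pretend this is a floppy-disk glyph
    "Print": ((0, 1, 0),
              (1, 1, 1),
              (0, 1, 0)),
}

def label_icon(screen, x, y, size=3):
    """Cut a size x size patch out of `screen` and look it up."""
    patch = tuple(tuple(screen[y + dy][x + dx] for dx in range(size))
                  for dy in range(size))
    for name, pattern in KNOWN_ICONS.items():
        if patch == pattern:
            return name
    return None  # unknown icon -- a real model would output a confidence

# A 5x5 "screenshot" with the Save glyph drawn at position (1, 1).
screen = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(label_icon(screen, 1, 1))  # -> Save
```

The hard part a sketch like this hides is exactly what makes the problem research-grade: icons vary in size, theme, and resolution, and a screen reader needs not just labels but a navigable structure and a way to act on what it sees.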

peterkos(10000) 1 day ago [-]

This paper[0] takes a look at something like this, but it's notable that this is seen as a springboard for more accessible-focused design, rather than the beginning and the end (See 'Discussion & Future Work').

[0] https://dl.acm.org/doi/abs/10.1145/3411764.3445186

Edit: I realize I've just linked to the same paper as the comment below. Oh well!

neurostimulant(10000) 1 day ago [-]

Heck, a desktop application that launches from a hot key, immediately highlights any buttons and input fields it can recognize on a screen/window, and simulates a click on selection would already be very useful.

mwcampbell(10000) 1 day ago [-]

This is being worked on. AFAIK, Apple is the first to incorporate this approach into a released product, with the Screen Recognition feature of VoiceOver starting in iOS 14.

nickdothutton(10000) 2 days ago [-]

When requesting new functionality please complete the "revenue opportunity size" field in the Jira and indicate what quarter you expect this opportunity to close.

geofft(10000) 1 day ago [-]

You're not wrong, and the answer is that this sort of thing needs to impact their bottom line somehow - either because customers insist on it as part of a purchase checklist, or because the legal system will actually go after violations, or because they'll lose important employees.

I don't have a real sense of which of those is most realistic.

tomklein(10000) 1 day ago [-]

Out of curiosity: Do screen readers use OCR nowadays, and if so, does it work well or rather poorly due to the lost HTML markup?

arp242(10000) 1 day ago [-]

OCR is a poor substitute since it can't really effectively navigate things due to lack of navigational information, recognition of semantic elements like headers, etc.

I'm not blind myself, but I've tried to use some screen readers in the past to get a feel of what it's like. While I'm a very inexperienced user, one thing I noticed is that even with the best designs it's actually really time-consuming compared to regular browsing. I would imagine that an OCR solution would be even more time-consuming, if it even works well at all.

londons_explore(10000) 1 day ago [-]

There is so much scope for using ML to make a screen reader work on any old software.

Yet nobody is really investing in screen readers.

miki123211(10000) 1 day ago [-]

They sort of do. VoiceOver on iOS and its Screen Recognition feature is probably the most notable example. It even tries to recognize some UI controls and emulate common behaviors (like sliding a slider), for example. It's far from perfect. It might help when you need to click the odd inaccessible button, but it is definitely not enough for daily web browsing.

miki123211(10000) 1 day ago [-]

On most Cloudflare-related HN threads, Cloudflare was really active and eager to answer the engineers' questions.

It's notable that this one is different. The fact that it's Sunday afternoon may be part of the reason, but I guess they really don't have anything to say. I'd really love to see their internal Slack now, though.

graderjs(10000) 1 day ago [-]

As someone who builds an open source remote browser myself, this is a non trivial task.

But anyone who wants to attempt to bring accessibility to a pixels-only or drawing-instructions-only remote isolated browser security model is welcome to fork my repository and add that kind of stuff.

I appreciate the importance of accessibility, but the tone of that article strikes me as strident and demanding, acknowledging only the situation, feelings, and difficulty of accessibility users, but not of the developers, nor of the other user groups.

Technically, the issue is a trade-off between security and inspectability. The most secure remote browser technology simply sends pixels (or, in the case of S2 and Cloudflare, drawing instructions) from the remote browser to the local client, where the viewport is then presented, so there is no HTML, JavaScript, or CSS sent to the client... which is the basis of that whole remote browser isolation security model. Making that accessible without the benefit of the HTML, CSS, and JavaScript on the client is not trivial. The more of that information you expose to the client to bring accessibility, the greater the attack surface from a security point of view. So it's a trade-off.

neom(10000) 1 day ago [-]

For what it's worth, I've known Matthew for many years. Although I wouldn't at all say we're close, I feel like I've had enough conversation to know who he is. Matthew is a good guy, I've never considered him to be tone deaf, and I genuinely believe he has the best interest of the many at his core. That said, the credence given to the visually impaired across the industry is categorically, absolutely, abysmally awful. I've never taken it as seriously as I should in my career, near all decision makers I know don't take it as seriously as they should, and I think shame on me and shame on everyone else. Things should be easier for visually impaired people, a) because it's the right thing to do and b) because it's low hanging fruit. While I don't think Matthew is unique, I do think he has a particularly significant responsibility given how important his technology is. As a shareholder, a friend, and a customer: I hope he takes this seriously, and I suspect he would.

mwcampbell(10000) 1 day ago [-]

I submitted this on Friday, but for whatever reason, it didn't catch on then. Thanks to the HN mods for putting it in the second-chance pool. I've pinged Cloudflare and eastdakota again on Twitter, so let's see what happens.

throwaway42day(10000) 1 day ago [-]

Because the only publicly acceptable answer would be to agree to all the poster's current and future demands, regardless of the cost, priorities, risk of breaking other features, etc. And it never works out because the demands tend to increase over time, and the PR damage of rejecting the very last demand is proportional to the number of ones previously accepted.

Make a thought experiment: think what if Cloudflare answered trying to explain the complexity, risks, and maybe cost estimates for supporting something like that, but refusing to add it right away. Nobody would listen to their reasoning. They would be immediately labeled as blind haters or whatnot, supported by endless news articles and retweets.

Make another thought experiment: assume they comply with the current demands and add the functionality at some fixed cost. Then in the future, the poster decides that the accessibility support is not sufficient and still makes life hard for blind people. He would come up with another set of demands and Cloudflare would again be forced to comply, because nobody would listen to their reasoning. And because it is physically impossible to make a blind person as productive at certain tasks as a non-blind one, there will be always room for improvement and room for more demands.

If you want to truly help the blind, please go ahead and launch a competing product. Or offer an ML-based tool working on top of existing products. Or create Wiki-like system where people would maintain semantic models of commonly used non-accessible sites, letting the accessible tools work over them. But all of that requires hard work, countless hours and numerous trials-and-errors. Trying to strong-arm someone else to put in that effort surely gives a much faster gratification, but it only results in further alienation and ghosting.

Sure, Cloudflare will release an official statement saying how they are committed and dedicated and working and planning and hoping, and the whole thing will get forgotten in a few weeks, but ultimately if you want to someone to help you, maybe try to understand their constraints and find a compromise, rather than trying to use the buzzwords to throw the mob at them.

frakkingcylons(10000) 1 day ago [-]

I think it's more to do with the timing (it's the weekend). You'd really want to talk to the relevant team before saying much. Given that this isn't an urgent worldwide problem, paging team members during their weekend would be the wrong move. They'll probably have a meeting on Monday and I think that's when we'd see an update from them.

mmahemoff(10000) 1 day ago [-]

It's a public company and there's probably only a few people who would be authorised and feel comfortable to speak on behalf of their employer. Most of them have been working hard building the company for years and shouldn't be expected to be on call for a non-production related concern being raised on HN on any given Sunday.

Cloudflare's management is exemplary when it comes to transparent comms, maybe we can wait a day for their response on this one?

_moof(10000) 2 days ago [-]

Fighting discrimination is difficult and can be exhausting. As someone in a (different) protected class I just want to say kudos for doing this work.

dnzkw(10000) 1 day ago [-]

Isn't demanding that non-trivial work is done just to accommodate your class the opposite of discrimination?

chmod775(10000) 2 days ago [-]

At this point browsers are a basic building block of our society.

There is absolutely no excuse for lacking accessibility features.

You might as well say your 'browser' can't render Arabic.

kevin_thibedeau(10000) 2 days ago [-]

> There is absolutely no excuse for lacking acessibility features.

Then how are the kids going to have their flashy Electron apps?

em-bee(10000) 2 days ago [-]

what is the legal situation here? wouldn't laws that require the employer to make accommodations for the disabled simply force the company to not use this tool for blind employees?

the company would have to prove that using this tool is strictly necessary, which i believe is hard to prove, because if it was strictly necessary then everyone at home should be using it too.

there should only be a few places where such a tool is strictly necessary, and those places already use it. anyone who only starts using it now that it's more convenient can't claim that they could not do their work without it, because they could until now.

brudgers(10000) 1 day ago [-]

The legal situation is akin to speeding. While technically it is illegal to drive 56 in a 55, you won't get a ticket for it. And in lots of places the flow of traffic will be 85 in a 65 and the cops are not about to hold things up.

Same with accessibility, only there are powerful economic interests at play too.

mwcampbell(10000) 2 days ago [-]

> what is the legal situation here?

Honestly, I don't know.

We may disagree on whether browser isolation is strictly necessary. But to the extent that Cloudflare's marketing efforts convince IT departments that it is, and that it's important to adopt it company-wide, that's bad for blind people unless Cloudflare makes the product accessible. I don't know if their marketing efforts are succeeding, but I'm being proactive here.

novok(10000) 1 day ago [-]

TBH it only becomes an issue when its required for the blind people to use this browser. If I was running a company and ran into this, I would just say the blind people and other unserved edge cases should just use normal chrome until cloudflare delivers the full version.

Security is a probability spectrum, not a binary as many are fond to think of it.

sonicggg(10000) 1 day ago [-]

Why is everyone here saying the same thing, given that we had a Cloudflare employee clarifying that this feature can be disabled, for now, on the machines of those who are visually impaired?

Sebb767(10000) 2 days ago [-]

> A blind acquaintance of mine once lost his job because of a newly added requirement that he use an inaccessible application.

IANAL, but wouldn't this be grounds for a lawsuit?

Ensorceled(10000) 2 days ago [-]

Yes. But then you have to hire a lawyer after just losing your job, survive during the time the lawsuit will take, win the lawsuit ('plaintiff was let go because position was redundant'), collect, resume your job or job hunt with a 'trouble maker' label.

I really wish HN contributors would not suggest the legal system as a solution for these types of problems, it's totally unrealistic.

ushakov(10000) 2 days ago [-]

i'm getting more worried about where Google is going with their accessibility strategy

flutter and the canvas-based google docs are completely inaccessible

heavyset_go(10000) 1 day ago [-]

Several months ago I asked the Flutter engineering director[1] this question[2] on a Flutter 2 HN submission:

> I don't understand how breaking accessibility with Flutter wouldn't mean that companies that use it on the web are violating the ADA.

And didn't get a response.

I'm still left wondering how a company that adopts Flutter on the web wouldn't be violating the ADA by breaking accessibility.

[1] https://news.ycombinator.com/item?id=26335062

miki123211(10000) 1 day ago [-]

Flutter is (somewhat) accessible with the help of an alternate, hidden DOM, only provided if an 'enable accessibility' button is pressed, for performance reasons. Unfortunately, some privacy zealots prevented web browsers from communicating that a screen reader was detected, so we need to press an extra button anytime we visit a Flutter app.

Google Docs has had two relatively good accessibility implementations for a long time, none of which relied on the original DOM, which was hidden from screen readers. The default one relies on pushing raw strings for the screen reader to speak, while the other one (called Braille mode, as the first method couldn't provide braille display compatibility), uses more modern APIs to provide the required information in the DOM, relying on special announcements only where necessary.

konaraddi(10000) 2 days ago [-]

> the canvas-based google docs are completely inaccessible

AFAIK Google docs is still accessible. See the "Additional details" at the bottom of https://workspaceupdates.googleblog.com/2021/05/Google-Docs-...:

Compatibility for supported assistive technologies such as screen readers, braille devices, and screen magnification features, will not be impacted by the canvas-based rendering change. We will continue to ensure assistive technology is supported, and work on additional accessibility improvements enabled by canvas-based rendering

wffurr(10000) 2 days ago [-]

Have you tried using a screen reader with Flutter apps or the canvas-based Docs?

From the very first result on "Flutter accessibility":

>> We strongly encourage you to include an accessibility checklist as a key criteria before shipping your app. Flutter is committed to supporting developers in making their apps more accessible, and includes first-class framework support for accessibility in addition to that provided by the underlying operating system


goodpoint(10000) 2 days ago [-]

Cloudflare is also killing Tor with its blockpages.

It's a global threat to privacy and freedom of information.

supernes(10000) 1 day ago [-]

It's not just Tor, their DDoS protection fails with JavaScript disabled, so sites that strictly enforce it (e.g. linuxquestions.com) are effectively censored for UAs with scripting disabled.

tmikaeld(10000) 2 days ago [-]

It's up to the site owner whether they want to block Tor or not; the site owner could just as easily have blocked Tor if they were using a normal server.

alisonkisk(10000) 1 day ago [-]

Why does a networking infrastructure product affect the browser visual UI in any way?

> A blind acquaintance of mine once lost his job because of a newly added requirement that he use an inaccessible application.

If this happens to you, please call a lawyer. This is an easy case to win.

mwcampbell(10000) 1 day ago [-]

> Why does a networking infrastructure product affect the browser visual UI in an way?

Because, as the original technology announcement [1] (which I linked in the OP) explains, they're running a remote browser and sending the rendered graphics from that browser down to the local client. So, since they're not putting in the extra work to send the semantic information required by screen readers and other accessibility tools, this breaks accessibility.

> If this happens to you, please call a lawyer. This is an easy case to win.

You're the fourth commenter on this thread to make that suggestion. Please check out the responses to the other three. [2] [3] [4]

[1]: https://blog.cloudflare.com/cloudflare-and-remote-browser-is...

[2]: https://news.ycombinator.com/item?id=28027986

[3]: https://news.ycombinator.com/item?id=28028116

[4]: https://news.ycombinator.com/item?id=28029813

Historical Discussions: Naval Architecture (July 27, 2021: 713 points)

(713) Naval Architecture

713 points 7 days ago by todsacerdoti in 10000th position

ciechanow.ski | Estimated reading time – 32 minutes | comments | anchor

July 27, 2021

When I first heard the term naval architecture I thought it was the artistic practice of designing beautiful boats. It turns out it's a proper scientific discipline dedicated to the engineering of ships.

Over the course of this article we'll go over different aspects of naval architecture. I'll explain how ships are propelled, what makes them stay afloat, and how they're carefully designed to not tip over even in dynamic conditions:

To understand why a ship rocks side-to-side in the wavy ocean waters, we first have to understand that it's water itself that's responsible for all of the ship's behaviors. We'll start with a simple device – a water-filled syringe.


You've probably seen a syringe before. When its plunger is pressed, the contents of the syringe come out on the other end. In the demonstration below we have a syringe connected through a thin hose to another container that has a spring in it. The entire system is filled with water. As you increase the force on the plunger observe the little spring being compressed at the other end of the hose:

By applying force on the plunger we increase the pressure in the fluid, which in turn pushes the little piston and compresses the spring. We can measure that pressure using a pressure gauge. In the demonstration below the sliders allow you to control the applied force and the area of the plunger:

Observe that the pressure P on the gauge is proportional to the perpendicular input force F, but it's inversely proportional to the area A. We can tie these three values using the following equation:

P = F / A

In the metric system, pressure is expressed in pascals (Pa), named after Blaise Pascal, while in the imperial system pounds per square inch (psi) are commonly used.

Notice that we no longer have any spring to compress. Even though we're applying a force the plunger doesn't move since water, unlike air, is minimally compressible. It's easy to squeeze an empty, tightly screwed plastic bottle, but when filled to the brim with water the bottle won't budge much. At the pressures we're applying here, water practically doesn't change its volume.

With a syringe, we've directly applied a force by pushing on the plunger, however, we could recreate that experiment by putting a heavy, tightly fitting weight on top of water in a container. In the demonstration below you can control how heavy the weight is to see how it affects the pressure read by the gauge:

Note that even when we remove the weight the pressure meter shows a non-zero read. The water itself also has weight so its mass above the point of measurement contributes to the readout as well.

Let's try to quantify the force exerted by the water. Firstly, notice that the shape of the water above the measurement forms a cylinder with height h and a base surface area A:

The mass m of that cylinder of water is just its volume V times the density ρ of the contained water:

m = ρ × V

That highlighted volume V, however, can also be expressed as the area of the base A times the height h which we can plug into the equation for mass m:

m = ρ × A × h

The force of gravity F acting on that water is equal to its mass m times the gravitational acceleration g:

F = m × g

If we now plug all these values to the equation for pressure P = F / A we get:

P = ρ × A × h × g / A

Which we can simplify by reducing the area A to obtain the final equation for pressure P of liquid with density ρ at a depth h under surface:

P = ρ × h × g

Note that the resulting pressure P is independent of the base area, it's only affected by the height h and density ρ of the water above:

In general the density ρ of water is not constant and depends on temperature and salinity, but at the scales we're interested in we can assume its value doesn't change.
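As a quick numeric check of P = ρ × h × g, here is a small Python sketch; the seawater density used is one typical value, since (as noted) the real value varies with temperature and salinity:

```python
# Hydrostatic (gauge) pressure at depth h: P = rho * h * g
RHO_SEAWATER = 1025.0  # kg/m^3, a typical value; fresh water is ~1000
G = 9.81               # m/s^2

def pressure_at_depth(h_m, rho=RHO_SEAWATER):
    """Pressure (Pa) contributed by the column of water above the point."""
    return rho * h_m * G

# Every ~10 m of seawater adds roughly one atmosphere (~101 kPa):
print(pressure_at_depth(10.0))
```

Since the base area cancelled out of the derivation, the same function applies to the thin straw and the wide barrel alike.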

If you recall our first demonstration of a loaded syringe, you may remember that the pressure applied to the plunger "travelled" through the hose to act on a spring, which was quite distant from the syringe itself. The very same rules apply here. Observe the pressure shown by the gauge at two different spots of this L-shaped container:

The right gauge always shows the same pressure even though in some cases there is less water "directly" above it. This may seem a little surprising, but you'd probably agree that if we removed the green plug from the filled container, the water would come out of the opening because of non-zero pressure in that area.

That behavior of pressure in incompressible fluids is known as Pascal's law. It states that pressure applied to any part of the liquid will be transmitted in all directions. As a result, the pressure in a connected body of water is the same at every level under the free surface.

It's worth stressing that in these static cases the pressure at a given level depends purely on the height of the body of water. This may have some unexpected consequences. For example, pressure at the bottom of these two containers is exactly the same:

Note how much less water there is in a thin straw compared to the wide container. This scenario is known as Pascal's barrel. By making the straw longer and filling it with water we can make the pressure inside the barrel arbitrarily large, causing it to explode. While Pascal himself possibly never performed that experiment, some modern recreations have been successful.


On its own pressure doesn't have any direction – it's a scalar value, just like temperature is. However, the force exerted by pressure acts in the normal, that is locally perpendicular, direction to the surface of the object. We can visualize the local forces created by the pressure with small arrows:

In fact, these forces act not only on the container itself, but also on any object placed in the water as well. In the demonstration below a red brick is hanging on a string. You can dunk it into water using the slider:

From this point on I'll remove the constraints of containers and we'll instead move outdoors where we'll submerge things into vast, peaceful ocean waters. In this new environment the rules of pressure remain the same – after all, we're still dealing with water.

The brick is now hanging on a string that is attached to a scale, which we'll lower. Notice that as a larger part of the brick gets underwater, the weight shown by the scale decreases:

While the horizontal arrows of water pressure forces balance each other, the vertical ones don't, and the pressure exerts a net positive force that pushes up on the brick. All the small arrows add up to the cumulative force known as buoyancy. In this example buoyancy is not strong enough to completely overcome the force of gravity acting on the brick, but it manages to diminish it to some extent, which reduces the weight as measured by the scale:

Note that for the sake of clarity, I shifted the arrows apart a little, but in this simple scenario buoyancy and gravity are actually positioned on the same vertical line.

If we use a wooden block instead of a brick, the situation changes a bit. As the block gets gently lowered, it will reach a point where the water is capable of keeping it on the surface and the string gets some slack:

If we remove the string and manually push a wooden block underwater, the force of buoyancy will become higher than the block's weight. When we let go of the block, the water will push it up:

Once the block gains some speed it may even overshoot its steady position, only to get pulled down by gravity again. After a while the water resistance slows the oscillating movement enough for the block to find its steady balance.

Let's try to quantify the force of buoyancy as created by the pressure. Naturally, the forces here are three dimensional, however, observe that all the horizontal components are matched on the opposite side and they cancel each other out. Only the pressure forces on top and bottom are unbalanced. You can drag the block around to see all the pressure forces acting on it:

The pressure PT on top of the brick acting on the top surface area A exerts the force FT equal to:

FT = PT × A = ρ × g × hT × A

Similarly, the resulting force FB acting from the bottom is:

FB = PB × A = ρ × g × hB × A

And the net force of buoyancy F in the upwards direction is the difference of the two:

F = FB − FT = ρ × g × (hB − hT) × A

Notice however, that the highlighted difference is just the height of the submerged part:

F = ρ × g × h × A

And the product of height h and the base area A is volume V of the submerged object, which gives us the equation that ties the force of buoyancy to the displaced volume V of fluid with density ρ:

F = ρ × g × V

Displaced volume is the volume of fluid that would normally occupy a space that is now filled by the object. We can also observe that density ρ times volume V is just mass m:

F = m × g

This force is just the weight of the displaced fluid. This general rule, known as Archimedes' principle, states that the force of buoyancy is equal to the weight of the fluid that the object has displaced. Note that buoyancy does not depend on the weight of the object itself, it's only affected by the submerged volume of that object.
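Archimedes' principle also gives a one-line float-or-sink test: an object floats when its average density is below the fluid's. A minimal sketch:

```python
# Archimedes' principle: buoyancy equals the weight of the displaced fluid,
#   F = rho * g * V
RHO_WATER = 1000.0  # kg/m^3 (fresh water)
G = 9.81            # m/s^2

def buoyant_force(submerged_volume_m3, rho_fluid=RHO_WATER):
    """Upward force (N) from the fluid displaced by the submerged volume."""
    return rho_fluid * G * submerged_volume_m3

def floats(mass_kg, volume_m3, rho_fluid=RHO_WATER):
    # An object floats when fully-submerged buoyancy exceeds its weight,
    # i.e. when its average density is below the fluid's density.
    return mass_kg / volume_m3 < rho_fluid

print(floats(0.6, 0.001))    # wooden block (~600 kg/m^3): floats
print(floats(3.0, 0.0015))   # brick (~2000 kg/m^3): sinks
```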

You may wonder if the same rule holds for an arbitrary shape. After all, I've conveniently used a plain wooden block to simplify the volume and force calculations. However, we can use the same method even for smooth shapes by subdividing them into arbitrarily small rectangular prisms.

In the following demonstration you can use smaller and smaller prisms to approximate the shape of a sphere. For the sake of clarity, I'm only coloring the top and bottom surfaces. You can drag the simulation around to see it from different angles:

Notice how quickly a group of blocks starts to resemble the smooth shape of the sphere.

Now that we know that the force of buoyancy depends on the submerged volume we can also analyze what happens when the wooden block is forcefully tilted:

The local forces exerted by pressure are no longer symmetrical so they end up having an uneven effect on the body. At a first glance it may be hard to figure out what the cumulative effect of all those small arrows is. However, just like we can simplify the weight of all particles constituting an object to a single gravity force acting on its center of gravity, we can also simplify the local forces of pressure acting on the surface of an object to a single buoyancy force acting through a center of buoyancy, which I've visualized using a small blue circle:

As it turns out, the center of buoyancy is just the center of gravity of the displaced water. Note that the force of buoyancy may not be aligned with the force of gravity of the object. That misalignment of forces will create torque causing the object to rotate until it finds its equilibrium, which happens when the forces of gravity and buoyancy are aligned.


In some sense, the floating block of wood we've seen so far forms a very simple ship – a raft. It's not a very practical vessel, as it has a small cargo-carrying capacity. We also want to make sure that a ship can withstand the elements, so ideally we'd use more robust materials. A solid block of steel won't float on its own because its weight is much larger than the force of buoyancy acting on it. However, if we hollow out the inside of the block we'll significantly reduce its mass while maintaining the volume:

What we see here is a very simple hull – the main body of a ship. This tub-like body is now capable of floating, despite being made from steel. Note that this ship behaves very similarly to the wooden block:
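We can check the hollow-hull idea with rough numbers (the hull dimensions and plate thickness below are made up for illustration): the shell floats as long as it weighs less than the water it could displace, and it settles at the draft where the two weights match:

```python
# Solid steel (rho ~7850 kg/m^3) sinks in water (rho ~1000 kg/m^3), but a
# hollow steel hull floats if the water it can displace outweighs the shell.
RHO_WATER, RHO_STEEL = 1000.0, 7850.0

# Hypothetical open-top box hull: 10 m x 4 m x 2 m, 0.02 m steel plate.
L, B, D, t = 10.0, 4.0, 2.0, 0.02
plate_area = L * B + 2 * (L * D + B * D)   # bottom plus four sides
hull_mass = plate_area * t * RHO_STEEL     # ~15,000 kg of steel
max_displacement = RHO_WATER * L * B * D   # ~80,000 kg if submerged to the rim

# The hull sinks until the displaced water weighs as much as the hull:
draft = hull_mass / (RHO_WATER * L * B)
print(hull_mass, max_displacement, draft)  # floats with ~0.38 m of draft
```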

This is ultimately how a ship floats. The weight of water it can displace is larger than the weight of the ship itself, causing gravity and buoyancy to balance each other. While this hull shape is used on barges, the hull of a typical modern ship looks more like the one in the demonstration below. You can drag it around to change the point of view:






This hull is covered with a deck, but some smaller boats may have an open top. The front part of the ship is called the bow. This hull has a bulbous bow, which is the "nose" in the front bottom part – for larger ships it improves the flow around the ship, reducing drag. The back of the ship is called the stern. The left and right sides of a ship are respectively called port and starboard. The fin-like object in the back, called a rudder, is used to steer the ship. The fan-like device next to it is a screw propeller, which, when rotated by an engine, pushes the ship forward.

Most ships have a streamlined shape, which reduces the resistance of motion when sailing as the bow can part water more easily. Other than drag considerations, it may seem that the shape of the hull can be more or less arbitrary. However, naval architects have to consider another very important factor – the stability of the ship.


So far all of the bricks and wooden blocks have been floating in pristine conditions, but in practice open waters are very rarely perfectly calm. Disturbances caused by waves and wind will usually rock the hull from side to side a little:

If you look at the ship from the front, you can see the so called angle of heel, which defines how far a vessel is tilted away from vertical. This front view will be very important for our considerations.

Firstly, let's see how the proportions of the hull affect the behavior of the ship when external forces are applied. In the demonstration below, the first slider controls the wind, which will tilt the ship one way or another. The second slider changes the proportions of the hull:

For relatively wider shapes the ship tilts a little due to wind, but it finds its stable position and will return back to vertical when the wind stops. For more vertical shapes, however, the ship will tilt and capsize even when the wind stops. For most ships this is a catastrophic condition.

To understand what's going on, we need to look how the force of gravity and buoyancy act on a tilted hull. As we've seen before, when the ship is tilted the two forces are separated by a certain distance shown below in white:

That white line is the righting arm, which is the horizontal distance between the two forces. The force of buoyancy acting through this arm exerts a rotating torque on the ship. The curly arrow at the top shows the direction of the turn. For a short and wide cross section of the hull, the force of buoyancy acts against the tilt of the ship, helping to straighten it up. For a more vertical shape, however, the buoyancy acts with the tilt of the ship, causing it to rotate even further!

We can visualize the length of the righting arm for different heel angles of the hull using the following plot. When the angle of heel is in the green zone, buoyancy helps to straighten the ship. However, if the ship's angle is in the red zone, buoyancy tries to heel the ship even more:

Some hull shapes are inherently unstable. The slightest deviation from pristine vertical balance will make the ship flip. However, even hull shapes that are initially stable at some angle reach their limits. All of these examples assume the deck is perfectly sealed and that water doesn't get into the hull.

Moreover, it's not just the rectangular proportions that affect the stability of the hull. The ship's safety is also affected by its cross-sectional shape:

At first glance it may appear that less rectangular, slimmer shapes are inherently better. However, many ships are expected to operate only at reasonable angles of heel out of concern for the safety of passengers and cargo. Within that range the stability is comparable. Additionally, bulky, rectangular hulls allow much higher cargo loading capabilities.

Another way to look at stability is to consider the metacenter, which lies at the intersection of the vertical from the original center of buoyancy with the vertical from the heeled center of buoyancy. I visualized that intersection with a white dot:

As long as the metacenter stays above the center of gravity then the heeled ship will try to return to vertical.

Naval architects designing the hull have to ensure that the ship will remain stable across the expected spectrum of angles. Some ships, such as sailboats, are designed to handle a much higher critical angle of heel, but their construction also employs a trick of lowering the center of gravity by using additional weight attached below the hull. This ensures that the metacenter can stay above the center of gravity for a much larger range of angles.
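For a box-shaped barge, the initial (small-angle) metacentric height can be computed with the standard relation GM = KB + BM − KG, where BM = I/V and I is the second moment of the waterplane area. A sketch, with the box geometry as a simplifying assumption:

```python
# Initial metacentric height of a box-shaped barge (small-angle stability):
#   GM = KB + BM - KG,  with  BM = I / V,  I = L * B**3 / 12
# Positive GM -> metacenter above the center of gravity -> self-righting.
def box_barge_gm(L, B, draft, KG):
    KB = draft / 2.0        # center of buoyancy of the displaced box of water
    I = L * B**3 / 12.0     # waterplane moment of inertia about the centerline
    V = L * B * draft       # displaced volume
    BM = I / V              # center of buoyancy up to the metacenter
    return KB + BM - KG

# A wide hull is stable; a narrow hull with the same high KG is not:
print(box_barge_gm(L=20, B=8, draft=1.5, KG=2.0))  # positive -> stable
print(box_barge_gm(L=20, B=2, draft=1.5, KG=2.0))  # negative -> capsizes
```

Lowering KG, as sailboats do with ballast below the hull, directly increases GM, which matches the trick described above.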

While a ship and its machinery have a relatively fixed mass and position this unfortunately can't be said about the cargo it carries.


Most ships sailing on ocean waters carry some sort of cargo, quite often packed in standardized shipping containers. Let's analyze what happens to the ship as we load it up. In the demonstration below, we're looking inside the ship. You can control both the number and vertical position of the containers:

As containers are added the ship will sink a little and increase its draft – the distance between the bottom of the hull and the waterline. A new balance is created between the increased weight due to the cargo and the increased buoyancy due to larger volume of the submerged part. However, if the heavy cargo is placed too high, the ship will spontaneously capsize after some time.
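That "sinks a little" can be quantified: the cargo's weight must equal the weight of the extra displaced water, so for a box-shaped waterplane the draft increase follows directly. A sketch with hypothetical numbers:

```python
# Added draft from cargo on a box-shaped waterplane: the new balance requires
#   m_cargo * g = rho * g * (L * B * delta_draft)
RHO_SEAWATER = 1025.0  # kg/m^3

def added_draft(cargo_mass_kg, L, B, rho=RHO_SEAWATER):
    """Extra depth (m) the hull sinks when cargo_mass_kg is loaded."""
    return cargo_mass_kg / (rho * L * B)

# Hypothetical 100 t of containers on a 50 m x 12 m waterplane:
print(added_draft(100_000, L=50, B=12))  # ~0.16 m deeper in the water
```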

Let's analyze how a change of the vertical position of containers on a big ship affects its center of gravity and thus the stability curve:

If the cargo is placed relatively low, it can actually significantly increase the stability of the ship. However, for some positions of cargo, the ship's stable position is tilted to one side or the other, even though the geometry of the hull and the carried load are perfectly symmetrical – that angle is known as the angle of loll. When the cargo is placed very high the ship becomes completely unstable.

Naturally, when the cargo itself is not horizontally balanced the ship will also find a stable tilted position, which you can witness in the demonstration below. The slider controls the horizontal position of the heavy box:

Once again, the ship will find its static equilibrium at some angle, known as the angle of list, which this time is caused by the lateral imbalance. You may have experienced that tilting when you lean on the side of a small rowboat or a kayak.

So far we've been keeping the cargo in place, either locked into slots, or tied down to the ship itself. However, if we let this heavy box slide, the results can be truly catastrophic:

As the ship tilts, at some point the friction between the cargo and the floor isn't high enough to keep the box in place and it starts to slide. This in turn shifts the center of gravity of the ship farther to the side, which causes an even bigger tilt. Even when the wind stops the ship won't return to vertical. We can clearly see how unavoidable the problem becomes on the diagram:

Heavy cargo on a ship has to be locked in place so that it doesn't change the ship's balance. While it's relatively easy to do for boxes, crates, and containers, some forms of cargo are more difficult to tame.

Free Surface

A tanker ship can be used to carry chemicals, crude oil, or even orange juice. Notice what happens to this tanker ship as it tilts with the wind. You can also change the level of liquid in the tank:

Once the ship starts to tilt, the liquid inside will move to the side as well, shifting the center of gravity of the ship and causing even further tilt. Even when the wind stops the ship won't return to the straight position. However, an opposite wind may be able to move the ship to the other extreme. It's worth pointing out that neither empty nor full tanks exhibit this problem.

This free surface effect is very dangerous for the ship's stability. Even ships that aren't purposely carrying liquid cargo still have to keep fuel and ballast water on board. Moreover, small, bulk materials like sand, gravel, and grains also exhibit a fluid-like behavior and will move around when the relative direction of gravity changes. One of the most straightforward solutions to this free surface problem is to separate the liquid into multiple compartments. This severely limits the movement of the liquid, making the ship much more stable:
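The benefit of compartments can be quantified with the standard free-surface correction: a slack tank reduces the effective metacentric height in proportion to the second moment of area of its liquid surface, and splitting the tank into n side-by-side compartments cuts that moment by a factor of n². A sketch with hypothetical tank and ship figures:

```python
# Free surface correction: a slack tank reduces the effective GM by
#   dGM = rho_fluid * i / displacement,  i = l * b**3 / 12  (surface inertia)
# n side-by-side compartments each have width b/n, so the total i drops n^2-fold.
def gm_loss(tank_l, tank_b, rho_fluid, displacement_kg, n_compartments=1):
    i_one = tank_l * (tank_b / n_compartments) ** 3 / 12.0
    return rho_fluid * n_compartments * i_one / displacement_kg

# Hypothetical 20 m x 10 m oil tank on a 5000 t ship:
print(gm_loss(20, 10, 900.0, 5_000_000))                    # ~0.30 m of GM lost
print(gm_loss(20, 10, 900.0, 5_000_000, n_compartments=2))  # 4x smaller loss
```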

Free surface effect also creates particularly dangerous conditions when a ship's hull is breached and water starts flooding the vessel. The heavy tilt can make the evacuation efforts of the crew and passengers much more difficult despite the ship still being technically afloat.


For the final discussion of stability let's look at the ship in waves. For prettier visuals we'll top our hull with a bridge – a platform from which a ship is commanded. The slider controls the amplitude of the waves:

Notice that as the wave passes through it changes the size and position of the underwater volume of the hull, which in turn shifts the center of buoyancy, causing the ship to tilt.

Every ship has its natural roll frequency, which determines how quickly the ship rocks side to side when disturbed by an external force. When waves approach the hull at a comparable frequency the ship can exhibit resonant behavior, similar to how pushing a swing at the right time makes it swing higher and higher. Naval architects can affect a ship's rolling behavior with both static and dynamic devices like bilge keels or antiroll tanks.
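The resonance analogy can be made concrete by modeling roll as a damped, driven harmonic oscillator; the natural frequency, damping ratio, and forcing below are arbitrary illustrative values, not data for any real hull:

```python
import math

def roll_amplitude(wave_freq, natural_freq=0.5, damping_ratio=0.05, forcing=1.0):
    """Steady-state amplitude of a damped, driven oscillator:
    A = F / sqrt((w0^2 - w^2)^2 + (2*zeta*w0*w)^2).
    The amplitude peaks when the wave frequency approaches the
    ship's natural roll frequency."""
    w, w0, z = wave_freq, natural_freq, damping_ratio
    return forcing / math.sqrt((w0**2 - w**2)**2 + (2 * z * w0 * w)**2)

off_resonance = roll_amplitude(0.1)   # slow swell, far from resonance
near_resonance = roll_amplitude(0.5)  # waves at the natural roll frequency
print(near_resonance > 5 * off_resonance)  # True: resonance amplifies roll
```

Bilge keels and antiroll tanks act on the damping term: raising the damping ratio flattens the resonant peak without changing the natural frequency much.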


For the final part of this article let's discuss how a ship manages to move forward. While sails have been a dominant form of ship propulsion for thousands of years, they can't power a vessel if there is no wind. Modern day ships typically use internal combustion engines that power screw propellers, which look roughly like this:

The shape of a propeller may seem fairly complicated, so let's try to devise it from first principles. The job of a propeller is to push water backwards, which, by Newton's third law, pushes the propeller and the ship forwards. To push the water away we'll use a few paddle-like blades attached to a hub, all rotating on a shaft. We'll start with just three blades placed symmetrically around the axis of rotation. Observe that we have some freedom in how we orient them:

As the propeller rotates around its axis, the forces its blades exert on water vary quite a bit depending on the blades' orientation. In the demonstration below the blue arrows show the forces exerted on water by each blade. The sum of all these forces is shown with the yellow arrow and the induced swirl is depicted with the red arrow:

When the blades are oriented sideways they push a lot of water, but only in a swirling motion, and the forces balance each other out. The engine's work is wasted on dragging the paddles through the water. As we reduce the angle, the projected area of the blades in the direction of rotation decreases and so does their pushing power. In the limit, when the blades are perpendicular to the axis of rotation, it becomes negligible.

At intermediate angles some of the work done by the blades is still used to swirl the water, however, the blades also axially push the water away. This causes the propeller and the boat to be pushed in the opposite direction. That propeller generates thrust.

In this very simplified analysis we seem to achieve the largest thrust roughly in the middle between the two extremes, but notice that we still have a relatively large drag induced by the blades. In practice the most efficient angle of attack will be much lower. We can visualize it in a top down view of a single blade. For top performance the blade's direction should stay within the green region:
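This crude picture can be captured with a flat-plate model in which the load on a blade is normal to its surface and proportional to the projected area. As the text notes, real blades generate lift and behave much better than this, so treat it strictly as a sketch of the trend:

```python
import math

def thrust_and_drag(angle_deg):
    """Flat-plate model of one blade section. angle_deg = 0: the blade
    lies in the plane of rotation, edge-on to its own motion; 90: the
    blade face pushes straight into its motion ("sideways"). The water
    load is normal to the blade, proportional to the projected area
    (~ sin a); its axial component is thrust, its in-plane component
    is the swirl-inducing drag the engine must overcome."""
    a = math.radians(angle_deg)
    normal = math.sin(a)  # projected-area factor
    return normal * math.cos(a), normal * math.sin(a)  # (thrust, drag)

angles = range(0, 91, 5)
best = max(angles, key=lambda d: thrust_and_drag(d)[0])
print(best)  # 45: thrust ~ sin(2a)/2 peaks midway between the extremes
```

In this model thrust peaks exactly between the two extremes, with drag still substantial there, which is why accounting for lift pushes the practical optimum to much lower angles of attack.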

A propeller will work most efficiently if the velocity of a blade relative to water is in that angular range. Let's look at the final velocity of the blade a bit closer. Note that while different parts of the blade have the same angular velocity, they have different linear velocity, so the moving blade approaches water with rotation-induced velocity which varies with the distance from the center of the propeller:

Moreover, a functional propeller will push the ship forward, causing it to sail at some speed, so the vessel and its propeller also have some forward velocity relative to the water:

The sum of rotation-induced velocity and the "forward" velocity creates the final velocity of the section on the blade. In the following demonstration, the right side shows the frontal view of the blade, making it easier to see the selected section:

Notice that the area of good efficiency is fixed against the final velocity of the blade. Since different parts of the blade have different velocity against water they should also have a different angle of attack so that locally they can function at maximum efficiency. To account for this the propeller blades have a characteristic twisted shape – their angle of attack decreases with radial distance. In the demonstration below you can control the amount of that twist in the blades:
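The velocity triangle, and why the required blade angle falls with radius, can be sketched numerically; the rpm and ship speed below are arbitrary illustrative inputs:

```python
import math

def inflow_angle_deg(radius, rpm=120.0, ship_speed=6.0):
    """Angle (measured from the plane of rotation) at which a blade
    section at the given radius meets the water. The rotational speed
    omega*r grows with radius while the forward speed stays the same,
    so the angle, and with it the required blade setting, falls
    toward the tip."""
    omega = rpm * 2 * math.pi / 60.0  # shaft speed in rad/s
    return math.degrees(math.atan2(ship_speed, omega * radius))

for r in (0.5, 1.0, 2.0):  # meters from the hub
    print(r, round(inflow_angle_deg(r), 1))
```

Setting each section's angle to this inflow angle plus a small, roughly constant angle of attack reproduces the characteristic twist.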

In its simplest form the twisted surface of a blade is part of a helicoid, which is a surface swept by a segment that is perpendicular to an axis while simultaneously rotating and moving along that axis:

A helicoid like the one above is defined by a radius, which specifies the extent of the segment, and a pitch, which describes the distance travelled along the axis during a single revolution. In fact, a helicoid was also used in Archimedes' screw, which served as the basis for the first screw propellers employed in ships. With the twist in place each part of the blade stays within the optimum range:
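A helicoid of a given pitch, and the radial fall-off of the local blade angle it produces, can be written down directly (a geometric sketch, not a real propeller section):

```python
import math

def helicoid_point(r, t, pitch):
    """Point on a helicoid: a radial segment of length r rotated by
    angle t while advancing pitch*t/(2*pi) along the axis, so one
    full turn (t = 2*pi) advances by exactly one pitch."""
    return (r * math.cos(t), r * math.sin(t), pitch * t / (2 * math.pi))

def pitch_angle_deg(r, pitch):
    """Local blade angle of a constant-pitch helicoidal surface:
    tan(angle) = pitch / (2*pi*r). The surface is steep near the hub
    and flattens toward the tip, which is the twist described above."""
    return math.degrees(math.atan2(pitch, 2 * math.pi * r))

print(pitch_angle_deg(0.3, 2.0) > pitch_angle_deg(1.2, 2.0))  # True
```

This is the same reason a screw thread looks steeper near its core than at its crest: constant axial advance per turn, growing circumference with radius.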

The final consideration is the total blade area, which is a product of the surface of each individual blade and the number of blades. You can change the latter in the demonstration below:

Notice that in the back view a higher number of blades occupies a larger part of the entire circular shape. The bigger the total blade area the larger the thrust, but only to some extent as the flow around one blade starts to affect the flow around the other blades.

The blades of modern propellers have an airfoil cross section, which contributes additional lift on the blades, thus improving their efficiency. However, the pursuit of blade lift has its limitations. When the pressure on the pushing side of the blade increases, it simultaneously decreases on the other side. If that pressure drops too much the water can locally boil, in an effect known as cavitation. When those vapor-filled bubbles repeatedly collapse on the surface of blades they can cause significant damage.

I need to point out that what I've discussed above was a simplified analysis of how screw propellers generate thrust – there are entire books dedicated to the hydrodynamics of propellers. Propeller design is a complex topic and even minor efficiency gains can result in big savings on fuel used to power the ship's engines.

Further Watching and Reading

Casual Navigation is a YouTube channel dedicated to maritime concepts. In his videos Rob analyzes famous capsizings, discusses antiroll techniques, and explains why the bottoms of ships are red. The recordings strike a good balance between entertainment and education.

When it comes to books, I recommend Applied Naval Architecture by Robert Zubaly. This entry level publication expands on everything I've discussed and touches on other topics like ship strength and floodability.

For a different take on boats I recommend YouTube channel Tips from a Shipwright which is dedicated to documenting the process of building and restoring smaller boats. Over the course of the last few years Louis Sauzedde has recorded his work on two full projects – a work skiff and a dory. Both series show great craftsmanship and expertise. It's a real joy to follow Lou's progress in his workshop.

Final Words

With experience acquired over millennia, naval architects have mastered the art of controlling the forces acting on hulls to make sure the ships, their passengers, and cargo arrive unharmed at their destination.

Both traditional and naval architects have to devise functional, safe, and habitable structures. However, naval architects face the additional challenge of designing for an ever-changing setting for their creations – the harsh and unpredictable sea.

All Comments: [-] | anchor

dirtyid(10000) 7 days ago [-]

Very intuitive. I wish there was a list of exemplar visualizations for different subject matters. It's 2021, there's still a lot of bad textbooks out there, emphasis on books.

garaetjjte(10000) 7 days ago [-]
mncharity(10000) 7 days ago [-]

I'm reminded of https://www.youtube.com/watch?v=ckaJs_u2U_A , an aluminum foil boat floating on dense SF6 gas, which I think is fun.

supernova87a(10000) 6 days ago [-]

That demo is always fun, but I cringe at the use of SF6. That stuff has about 23,000x the greenhouse potency of CO2.

_Microft(10000) 7 days ago [-]

Some hull shapes are inherently unstable. The slightest deviation from pristine vertical balance will make the ship flip. However, even hull shapes that are initially stable at some angle reach their limits. All of these examples assume the deck is perfectly sealed and that water doesn't get into the hull.

Loosely related: here is a video of the German Maritime Search and Rescue Service (DGzRS) trying to 'sink' one of their (then new) smaller rescue lifeboats which has self-righting capabilities:


(Of course, it was a test of whether it does have these capabilities, not an attempt at actually sinking it.)

dtgriscom(10000) 7 days ago [-]

Interesting. The designers can probably analyze the rate at which the boat righted to quantify its stability.

lambdasquirrel(10000) 7 days ago [-]

One cool thing to think about is the effect of tumblehome hull forms. Kind of makes you check in, that you really know what the center of buoyancy is.


jasonwatkinspdx(10000) 7 days ago [-]

Yeah, this is a fantastic blog post but is a little inaccurate in some edge cases.

In solo around the world races like Vendee Globe, the boats are required to be fully buoyant and self righting no matter how they end up. The most common approach to achieving this is to rig a canting keel with a device that when the boat capsizes, lets the keel swing to one side, creating a weight imbalance that rights the boat. They're quite serious about it too: you don't get to race the boat unless you demonstrate it works that way at the pier.

ljhsiung(10000) 7 days ago [-]

Does anyone know how he creates these animations? I like the representation and would like to create them as well.

jimhefferon(10000) 7 days ago [-]

Expanding on that question, does anyone know of a place where work like this gets discussed? I was unaware of his stuff, which is indeed wonderful, and if there is a way to meet with others who are interested in this kind of thing, and in doing it for ourselves, I'd sure like to be there.

thamer(10000) 7 days ago [-]

It looks like raw HTML5 canvas with some WebGL (2D): https://ciechanow.ski/js/navarch.js with some helper functions in https://ciechanow.ski/js/base.js

mihaifm(10000) 7 days ago [-]

Also interested. Looks like a lot of it is JS code written by hand. This is certainly readable code: https://ciechanow.ski/js/navarch.js

fuzzylightbulb(10000) 7 days ago [-]

I had the same question. (Putting this here so that I can come back later.)

WalterBright(10000) 6 days ago [-]

> at the scales we're interested in we can assume its value doesn't change.

The reason submarines can be neutrally buoyant at specific depths is because water is compressible, and water's density changes with depth. Adjust the submarine's density to match the water's density at a certain depth, and the sub will be neutrally buoyant at that depth.

IshKebab(10000) 6 days ago [-]

No it isn't. Submarines can be neutrally buoyant at any depth because they have the ability to control their density. The fact that water is slightly compressible has no effect on submarines' operation.

nradov(10000) 6 days ago [-]

Submarines seldom operate at neutral buoyancy. Usually they rely on the planes and propulsion for depth keeping.

WalterBright(10000) 6 days ago [-]

The Wright brothers, in trying to figure out how to design an airscrew (propeller), assumed it would work like a ship's screw, and went looking for the theory behind it.

There was no theory, ship's screws were designed by trial and error.

So the Wrights invented the first propeller mathematical theory. It produced propellers that were 90% efficient, about double the efficiency of other experimenters' ad hoc propellers.

Double the efficiency meant the Wrights needed half the horsepower to get into the air.

Stevvo(10000) 6 days ago [-]

Interestingly, whilst computers play an ever more dominant role, there is still a large amount of trial and error that goes into hull design.

All famous hull designers draw their curves by hand.

Ichthypresbyter(10000) 6 days ago [-]

>There was no theory, ship's screws were designed by trial and error.

The early marine propeller designs consisted of an Archimedes-type screw with multiple full turns. During tests of one such design on a small boat in the Paddington Canal in London, half of the propeller broke off. The broken propeller (with only one turn) turned out to be able to propel the boat twice as fast. [0]

The inventor, Francis Smith, amended the patent to describe either a single-turn screw propeller or one with two screw threads each describing half a turn (essentially a two-bladed propeller).


WalterBright(10000) 6 days ago [-]

The propeller efficiency aspect is why I don't believe all the other 'first powered flight' claims. The Wright Flyer had barely enough power to get airborne, and that's with the double-efficiency propeller.

Attempts to build flying replicas of the other claimants' machines don't impress me because they don't address the power needed to get those contraptions into the air with the engines available at the time. (The Wrights couldn't find an engine with the power/weight needed, and had to design/build their own powerplant.)

jonshariat(10000) 7 days ago [-]

'It's worth stressing that in these static cases the pressure at a given level depends purely on the height of the body of water.'

How did I not know this? It's so counter intuitive that a thin column of water can cause the same pressure as a wide one.

The video they link shows this in action: https://www.youtube.com/watch?v=EJHrr21UvY8

One mind-bending fact she shares in the video is that a thin layer of water touching the dam wall exerts the same pressure as an entire lake.

zwkrt(10000) 6 days ago [-]

A fun thought experiment is to realize that if the earth's atmosphere were totally removed except for a cylinder that encircled your house and went into space, you would feel physically the same. Just like in the water, in some sense the only thing air pressure cares about is how much air is directly on top of your head.

abraae(10000) 7 days ago [-]

I'm building a system for measuring levels in water tanks using submersible pressure sensors (triggered by living in a dry area and being totally dependent on our tanks).

Quality sensors cost a lot - too much for domestic purposes. Much cheaper ones can be bought from China, so I've been looking for some way to test them, without actually altering the level in a gigantic water tank.

It occurred to me I should be able to just use a thin vertical pipe. But as you say, this seems counter intuitive, especially if the pipe is barely wider than the sensor itself. Just doesn't... Feel right.

morpheos137(10000) 7 days ago [-]

why is it counter intuitive for you? It is not to me at all. Gravity pulls down. There is essentially no lateral component to gravity. Height is measured in the vertical dimension, the same as gravity. Now imagine the water column as a stack of pennies. The more pennies are added to the stack, the more pressure is on the lower pennies. It does not matter how many stacks are in front of, behind, or to the side of the stack you are looking at.

zarzavat(10000) 6 days ago [-]

Perhaps you are conflating pressure and force?

Pressure is force per area, the area doesn't matter by definition. Similarly to how we measure rainfall in millimetres: volume / area = length.

Whereas if you were to place a bucket of water on your head, the area of the bucket would surely make a big difference to the force you feel, all else being equal.

pkaye(10000) 7 days ago [-]

It's basically Bernoulli's equation. Pressure is force over area, and the mass of the body of water above is area times height times density, so the area cancels out. You can add velocity into the equation and it becomes a conservation of energy equation. Similarly there is a continuity equation, which is conservation of mass. These two are the backbone of a beginning fluid mechanics course in engineering.
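The area-cancellation argument in the comment above reduces to a one-liner (using the usual textbook values for fresh-water density and g):

```python
def hydrostatic_pressure(depth_m, rho=1000.0, g=9.81):
    """Static pressure at a given depth: p = F/A = (rho*A*h*g)/A = rho*g*h.
    The column's cross-sectional area cancels out, so a thin pipe and a
    whole lake produce the same pressure at the same depth; only the
    total force F = p*A scales with area."""
    return rho * g * depth_m

print(hydrostatic_pressure(10))  # ~98 kPa, roughly one extra atmosphere per 10 m
```

This is also why the thin-pipe test rig described elsewhere in this thread is a valid way to exercise a tank-level pressure sensor.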

tobmlt(10000) 7 days ago [-]

Fluids have so much to bend the mind. Soliton waves, shocks, expansions, critical transition phenomena (besides phase transitions). Look at the Froude number and planing hulls, the purpose of chines, steps, etc. in a high-speed hull to manage skin friction vs wave drag. Wave dispersion, wave superposition, etc. The free surface itself means that if you are solving for flow, the flow determines the free surface, which then determines the flow... ad infinitum. It's nonlinear like a baby general relativity in that way. The shallow water equations are hyperbolic so you get shocks etc. Deep-water, long-wavelength waves act in linear fashion so you get superposition effects. On and on. Fun times.

vishnugupta(10000) 6 days ago [-]

For me the aha/eureka moment about the force of water pressure was when I read that water jet is used sometimes to precision cut diamonds.

palijer(10000) 7 days ago [-]

This is one of those physics phenomena where I feel like they are a software bug. Bell's Theorem and a lot of quantum entanglement stuff is like that as well.


ummonk(10000) 7 days ago [-]

That's interesting because it seems perfectly intuitive to me.

Both in terms of understanding the physics (weight of water above the column divided by the area of that column, and then any water around the column just has to have the same pressure to contain that column) and just plain practical experience from e.g. dipping underwater in the ocean and not getting crushed like a bug.

marcosdumay(10000) 7 days ago [-]

On those oddly shaped reservoirs, the walls compensate for the lack of a water column above the places where it widens. The actual force on the water is the same as would be in a cylinder.

gk1(10000) 7 days ago [-]

Naval architecture is a fascinating and beautiful discipline. This post does it justice.

It's too bad there aren't many naval architecture careers in the US. We hardly design or build any ships here anymore. The one exception is military ships. So if you have a naval architecture degree your main employer options are a) government or b) government contractor.

Source: Naval architecture degree.

ghoward(10000) 7 days ago [-]

Hey, you might be able to answer this: if someone who wants to learn naval architecture deeply (but not for a career), how should they go about it?

I'd love to design ships as a career, but as you said, there isn't much work, but why not learn for the sake of learning?

Also, aeronautical engineers, I'd love to learn that too. How to go about it?

sandworm101(10000) 7 days ago [-]

>> It turns out it's a proper scientific discipline dedicated to the engineering of ships.

No. It is about the engineering of all sorts of things. Ships are a subset. I'd say that it covers all things that float, but that wouldn't include docks, cranes and other things that integrate with ships.

>>As containers are added the ship will sink a little and increase its draft – the distance between the bottom of the hull and the waterline.

This is the wikipedia answer. In the real world 'draft' is the lowest part of the ship, which might be something other than the hull. Sailboats especially measure draft from the bottom of their keel, a thing lower than the hull. The 'hull' is the watertight body and doesn't include things like keels and rudders which, while uncommon on large vessels, normally extend well below the hull's depth.

opium_tea(10000) 7 days ago [-]

It's amazing what different people take from articles. That someone would read through this page and instead of appreciating the effort and craft their response would be an absolute textbook example of tedious internet pedantry.

cjdell(10000) 7 days ago [-]

This page is so well done. All physical and mechanical problems should be taught this way. I just loved playing with the sliders. I felt like I didn't even need to read the text to understand the concepts. This could be a great alternative teaching style for bored kids.

defaultname(10000) 7 days ago [-]

This is the person who did the astonishing internal combustion engine and camera entries with the same dedication to detail.



They do fantastic work.

Historical Discussions: Father builds exoskeleton to help wheelchair-bound son walk (July 27, 2021: 684 points)

(684) Father builds exoskeleton to help wheelchair-bound son walk

684 points 7 days ago by geox in 10000th position

www.reuters.com | Estimated reading time – 2 minutes | comments | anchor

PARIS, July 26 (Reuters) - 'Robot, stand up' - Oscar Constanza, 16, gives the order and slowly but surely a large frame strapped to his body lifts him up and he starts walking.

Fastened to his shoulders, chest, waist, knees and feet, the exoskeleton allows Oscar - who has a genetic neurological condition that means his nerves do not send enough signals to his legs - to walk across the room and turn around.

'Before, I needed someone to help me walk ... this makes me feel independent,' said Oscar, as his father Jean-Louis Constanza, one of the co-founders of the company that makes the exoskeleton, looks on.

'One day Oscar said to me: 'dad, you're a robotic engineer, why don't you make a robot that would allow us to walk?'' his father recalls, speaking at the company Wandercraft's headquarters in Paris.

'Ten years from now, there will be no, or far fewer, wheelchairs,' he said.

Other companies across the world are also manufacturing exoskeletons, competing to make them as light and usable as possible. Some are focused on helping disabled people walk, others on a series of applications, including making standing less tiring for factory workers.

Wandercraft's exoskeleton, an outer frame that supports but also simulates body movement, has been sold to dozens of hospitals in France, Luxembourg and the United States, for about 150,000 euros ($176,000) a piece, Constanza said.

It cannot yet be bought by private individuals for everyday use - that is the next stage the company is working on. A personal skeleton would need to be much lighter, Wandercraft engineers said.

Just outside Paris, 33-year-old Kevin Piette, who lost the ability to walk in a bike accident 10 years ago, tries one on, walking around his flat, remote controller in hand.

'In the end it's quite similar: instead of having the information going from the brain to the legs, it goes from the remote controller to the legs,' he said, before making his dinner and walking with it from the kitchen to the living room.

Reporting by Yiming Woo Writing by Ingrid Melander; Editing by Janet Lawrence

Our Standards: The Thomson Reuters Trust Principles.

All Comments: [-] | anchor

ryeguy_24(10000) 7 days ago [-]

I have a 3.5 year-old and just had twins. A story like this makes me feel very warm inside. Being a relatively new father, I'm beginning to understand the unconditional love and sacrifice that grows when you have kids. I commend this father for doing this. He is a modern day hero.

throwaway69123(10000) 6 days ago [-]

It got me good.

elisee(10000) 7 days ago [-]

Video of the exoskeleton in action: https://www.youtube.com/watch?v=-yBfUcFRZ-I

SamuelAdams(10000) 7 days ago [-]

This is really cool! See to me this is what hacker news should be all about. A group of people facing some problem and hacking something together that works really well.

samstave(10000) 7 days ago [-]

One of my favorite movies 'Edge of Tomorrow' has dope exoskeletons, and while it's a movie - that's where this is heading...

I am surprised Boston Dynamics hasn't made one! If you converted the agility of Atlas into an exoskeleton, that would be pretty interesting...

nanodeath(10000) 7 days ago [-]

Similarly I thought of Death Stranding :)

Though I guess exoskeletons are not uncommon in sci-fi.

baby(10000) 7 days ago [-]

Reminds me of matrix as well

capekwasright(10000) 7 days ago [-]

They actually did, as part of DARPA's Warrior Web program [1]. They ultimately spun off the program to Ekso Bionics as a consequence of the Google acquisition back in 2014 [2].

[1] https://www.army.mil/article/125315/darpas_warrior_web_proje... (the first image is of BD's system; you can just make out 'BOST' around the center of the left thigh)

[2] https://www.globenewswire.com/fr/news-release/2014/10/02/670...

nzeribe(10000) 7 days ago [-]

Aliens, 1986: 'I have a class-2 rating.' https://www.youtube.com/watch?v=YPMk-EEyOpE

xor99(10000) 7 days ago [-]

This is inspiring. The cost can be brought down through use of soft and/or lightweight materials. For example, the use of textile based actuators. Good examples in link below:


These things are not for the near term so really impressed by this story.

samstave(10000) 7 days ago [-]


You know what I am wondering, the accelerometers in phones are super small, cheap and there are some sensor systems that incorporate a ton of other sensors (whatever happened to Google's smart jacket they were making with Levis)

Anyway, it would be very interesting to have a textile with a fabric of accelerometers woven into the fabric... Solar panels on the thighs and what not to power the sensors... with a compression-heel in a shoe / boot to ad more power with every step.

Anyone know how much power an accelerometer takes and if you can wire a matrix of them together to a set of controllers that all they do is capture the telemetry from the sensors?

Check out this video on the making of accelerometers


m0rphy(10000) 6 days ago [-]

Boston Dynamics should be making one of these. It fits really well with their expertise and is much better for them to be making these rather than robots that get sent to wars.

arpafaucon(10000) 6 days ago [-]

I'd love to see that! I tend to think that some of their technical choices on Atlas would have to be revised, though. Hydraulic actuators (very good power/weight ratio) make the robot quite noisy, which is fine for a robot, but less cool if someone is stuck inside it all day.

throwaway-571(10000) 6 days ago [-]

More money in war than in peace application.

mtwittman(10000) 7 days ago [-]

The research and engineering of new mobility aids is fine, but the inventor/CEO's quote, 'Ten years from now, there will be no, or far fewer, wheelchairs' belies an attitude strongly criticized by many in the disability community. And in the video footage he says, 'wheelchairs are an anomaly, men and women, human beings are meant to be upright'. Imagine saying 'Bicycles are an anomaly—humans are meant to be upright, not in some aero-dynamic tucked position.' Deriding one form of mobility tech (an asset to more than 10,000,000 people) to promote another potential one is disappointing.

The Exoskeleton's Hidden Burden [0] is a good article that includes the history of exoskeleton development (goes back to 19th c. Russia):

[0] https://www.theatlantic.com/technology/archive/2015/08/exosk...

playpause(10000) 6 days ago [-]

He's not deriding wheelchair tech. He's saying wheelchair tech is not good enough, disabled people deserve better, and he's doing something about it. I'm not convinced wheelchair users would be better off if he embraced the wheelchair-positive attitude that you are arguing for.

caturopath(10000) 7 days ago [-]

> 'Ten years from now, there will be no, or far fewer, wheelchairs,' he said.

I wonder how much money he'll put on that.

theslurmmustflo(10000) 7 days ago [-]

if battery technology keeps improving, I'm not sure why that wouldn't be the case for people with long-term disabilities

bredren(10000) 7 days ago [-]

Ten years seems fast, but I foresee a revolution in local micro manufacture that is a mashup of 3d printing as we know it and local assembly of off the shelf parts.

This will lead to a revolution of generic complex product availability that ignores patents and trademarks.

An exoskeleton may not be the first type of product to be ordered and assembled from a few blocks away. The amount of QA for safety would need to be high.

However, if the price of such things is 1/10 or even 1/20th that of the branded, official version people will continue to turn to hyperlocal, small-run manufacture for even the most safety-critical products.

ad404b8a372f2b9(10000) 7 days ago [-]

I share your skepticism but I wouldn't question the conviction of a guy who built a company and worked on this project for 10 years to help his son walk again.

manmeet(10000) 7 days ago [-]

This is amazing. I built one to help my nephew walk, and now selling commercially ( http://trexorobotics.com )

I am fed up with the lack of options available to individuals. People thought that everyone would get an exoskeleton and be able to walk with it everywhere. But the industry ran into many challenges.

A big one that many don't understand is getting insurance coverage. The way the US healthcare system is designed, it will only cover restoration of mobility, not a restoration of function. So, from their perspective, a wheelchair and some pain meds can do the job easily.

I believe that the key is to start with children: this is where you have families desperate for a solution, higher costs due to them growing and spending their entire life in a wheelchair, and the option to truly have a life-changing impact.

But things are changing, people are starting to notice the work that we are doing. We need a lot more people building exoskeletons and similar powered orthotics!!

prawn(10000) 6 days ago [-]

Incredible story - the tech looks brilliant. What a contribution to your nephew, sibling and the world.

aprdm(10000) 6 days ago [-]

Your Team page is giving a 404, was really curious !

Great job. This is truly Amazing.

ricopags(10000) 6 days ago [-]

Congrats on the amazing product! A minor note on the website:


That's too much getting in my face without giving me a chance to browse.

If I'm visiting the website, I'm probably going to be able to answer some of those questions without having these distracting offers thrown into my face immediately.

renewiltord(10000) 6 days ago [-]

I've always wondered about these devices. How do they work? Like they detect muscle contraction and amplify the motion? Or is it a purely mechanical device?

CountDrewku(10000) 6 days ago [-]

>The way the US healthcare system is designed, it will only cover restoration of mobility, not a restoration of function.

You'll find that countries with socialized medicine won't cover more than that either. It's too expensive, regardless of whether you're covered privately or publicly. Socialized medicine is typically much harsher on keeping costs down as well.

Neurocynic(10000) 6 days ago [-]

A big issue for industry is medical certification too. Have you done yours? An exoskeleton like yours falls under Class II medical device and would require, at the minimum, a 510(k) notification to be filed.

ajoy(10000) 7 days ago [-]

Great work!

FYI, your 'Team' link in the footer is not loading properly.

rubicon33(10000) 6 days ago [-]

What a truly awesome product and mission. Inspiring, to say the least. Are you hiring software engineers?

jandrese(10000) 7 days ago [-]

I had thought when the Segway came out that a wheelchair version would follow shortly afterward where a person without functional legs could get on a saddle and move around quickly and at a typical human height so things on shelves aren't such a problem.

Segway hit the market 20 years ago and it still has not happened. I get that there are a number of complications (getting on and off is a challenge), but it seems like it should be solvable. The technology has only improved over the years, especially the batteries. This should be doable.

fortylove(10000) 7 days ago [-]

I don't have anything to add, but your product looks fantastic and I can only imagine the joy it brings to patients and families. Nice work!

sholladay(10000) 6 days ago [-]

I have a family member who needs something exactly like Trexo but for an adult. Are you able to make them for adults?

sillysaurusx(10000) 7 days ago [-]

Talk about impactful work. I wonder how many lives you've changed.

Well done. This is something close to miracle tech, at least for the people wearing it.

I can't help but feel curious whether they're a viable alternative to wheelchairs, or if it's a temporary feeling (kind of like riding on a rollercoaster, in that you go and do it for the experience and then return to your normal life). But that's just my ignorance talking.

Also, fuck the US insurance system. You won't find many topics that make me talk that way, but as I get older it feels something closer to pure evil. I've met so many people who have been screwed over by that system (and personally experienced my share of it).

There's a woman I've been texting with who I met at a gas station. She was clearly in distress, so my wife and I offered her a ride home. To cut a long story short, she spent her fourth of july miserable, and when I raised the idea of getting prozac or some sort of antidepressant (my own 'miracle tech'), she said 'Oh, I used to be on that. I can't afford it because no insurance' and I practically flipped my phone onto the concrete. She could be living a normal and happy life.

I can't imagine how much worse it is for parents who otherwise need to spend $thousands for alternative solutions like this. If you can make it in any way affordable, it'll change countless lives, I'm certain.

bottled_poe(10000) 7 days ago [-]

Inspiring. You're doing amazing work.

pkaye(10000) 7 days ago [-]

I'm curious why it is so expensive though. It works out to $36k paid over 3 years to own it.

blairbeckwith(10000) 7 days ago [-]

Just wanted to say thanks for building this.

optymizer(10000) 7 days ago [-]

I was looking at the videos, and realized that maybe we don't need small exo-bones (societal expectations and normalcy aside), maybe it's easier to go the other way and make exo-robots, since they'd be bigger with more room for batteries, could balance on their own and a human could be sitting/standing inside, driving the legs with some input method - either with legs, or hand gestures.

I'm picturing the robot in Avatar [1], but with an open top and much less threatening and not weaponized [2], like big robotic pants. If I were to quit my job, it would be to make human robot minotaurs a reality, but then again, what do I know about robots?

[1] https://www.youtube.com/watch?v=F6ttDZFmGqg

[2] https://www.pcmag.com/news/toyotas-latest-humanoid-robot-can...

chime(10000) 7 days ago [-]

> So, from their perspective, a wheelchair and some pain meds can do the job easily.

And not even a good wheelchair. For my wife, recently diagnosed with MS, they would only approve a basic, featureless, uncomfortable one after I paid the $3000 deductible plus 20% coinsurance. Instead, I got a lightweight folding electric wheelchair with nearly a full day's worth of battery (15 miles), plus a spare battery and an adjustable headrest, for $1300 off Amazon. Add an octopus-tripod fan with a 10hr battery, a golf-cart umbrella, a bendable cane holder, a bottle holder, and an A/C fan jacket for a total of $200, and now she is able to spend a few hours out with our kids at museums, aquariums, and zoos.

It was literally cheaper for me to buy all of this cash than try to spend 40+ hours getting insurance approval.

I absolutely love how practical and solid your product is. I cannot comment on the pricing because I have no idea what your costs/market is but if my kid needed $999/mo to walk, I would do literally anything to be able to afford it. Hopefully the costs keep coming down for those with a smaller budget. Good luck!

nemo44x(10000) 7 days ago [-]

In today's culture wouldn't this be considered 'ableist'? Aren't we internalizing the idea of being 'disabled' to this kid and thus dehumanizing him by building a machine that 'corrects' his inability to walk?

I've seen similar arguments from the deaf community about hearing restoration as 'correcting' something when nothing is 'wrong'.

FWIW I think this invention is awesome.

CTmystery(10000) 7 days ago [-]

Are you genuinely curious or do you have an axe to grind? Have you seen anyone accuse such an invention as ableist? This is exactly what new technology is supposed to do: make things possible that were previously difficult or impossible. I don't see how any of the culture wars in vogue have anything to do with it.

a_conservative(10000) 6 days ago [-]

I think these kinds of questions are very personal. I had a close family member who couldn't walk. She didn't make her inability to walk a part of her identity, it was just something annoying she had to put up with. I doubt she would have been upset about gaining the ability to walk.

The only time I've heard these types of questions are around people who are deaf. I suspect that the issue is a complex one of language and community. My understanding is that (some?) deaf people consider sign language their primary language, not the native tongue of their country.

It's too bad that the hn downvoters are hitting your post so hard, this could be a useful, informative discussion. It's hard to talk about this kind of thing without crossing political correctness boundaries though.

MitPitt(10000) 7 days ago [-]

By this logic, any medicine is ableist. Are we dehumanizing people with flesh wounds by sewing them up and stopping the bleeding?

ad404b8a372f2b9(10000) 7 days ago [-]

I've been making my own, purely as a pipe dream to have some hope of walking again. It's not easy, but I'm hoping it'll be a better fit than consumer ones, and probably cheaper.

I feel like we've made leaps and bounds in prosthetics this past decade, but orthotics and exoskeletons haven't been following the trend for some reason. As an engineer I imagine there are some significant design constraints that make it hard, but seeing what we're capable of doing in any other industry, I can't help but feel the problem has to be something other than purely technical limitations. The price is also a big issue: 10k for the cheapest exoskeletons, 150k for the one in this article.

hinkley(10000) 7 days ago [-]

We live in a world where prices are usually set by the constraints of mass production. We have trouble accepting the cost of custom-made things. Hearing aids can be ridiculously expensive. Custom keyboards at least are not quite so ridiculous.

Now that some of the earbud manufacturers are dabbling in processing of environmental sound, some people may be able to use consumer grade equipment, particularly younger Boomers and older Gen-Xers, due more to stigma than anything else.

Unfortunately that probably reduces volume of product for everyone else, keeping the mean price roughly the same but jacking up the high end.

the_lonely_road(10000) 7 days ago [-]

My understanding is it's the same thing limiting everything else: power. Batteries, more specifically. We just don't have a way to power anything for very long that doesn't have a cable attached to a wall attached to a power station. Honestly, the ability of humans (and other living creatures) to just DO SO MUCH on a little biomass each day is astounding.

mustafa_pasi(10000) 7 days ago [-]

If you don't already, you should document your project on YouTube. It's a super cool project, and I'm sure you could get lots of Patreon sponsors that way, and probably even commercial ones.

smoldesu(10000) 7 days ago [-]

My grandpa, who died before I was born, was wheelchair-bound for the second half of his life, and I was told that he handled it well, but internally it took an incredible toll on his body. When he finally passed away, they found that his basement was littered with machinery, engines and pre-cut body panels. Apparently, in the last few years of his life he threw himself into the idea of building his own plane to restore his freedom on his own terms.

Bit of a sad story, but it never fails to motivate me past my usual, procrastinating self.

AutumnCurtain(10000) 7 days ago [-]

I worked in a logistics operation that explored exoskeleton options deeply in 2019. Options were shockingly limited and the best products were not powered in any way - essentially just frames to bolster your body mechanically. Compare to the massive, extremely precise, and power-efficient automated storage and retrieval systems and it just seemed like stone age tech. My takeaway was that powered exoskeleton tech must be vastly more difficult to implement than I'd expected. The financial incentives are there for warehouse and logistics applications, but the tech just isn't.

giantg2(10000) 7 days ago [-]

I've always wanted to make one that fits like spandex and uses things that mimic anatomical function, like nitinol wires to mimic muscle.

justinclift(10000) 7 days ago [-]

What's the control method you've been using? eg for input to direct the actions

namlem(10000) 7 days ago [-]

Some of the unpowered exoskeletons are pretty impressive, and all things considered, $10-20k isn't that crazy a price. After all, they are not really being mass produced, and are pretty complex mechanical systems. I'd love to see a big company like Caterpillar get on board and start pumping them out for construction purposes.

The powered ones are still too expensive for anything but the most niche applications sadly.

dalbasal(10000) 7 days ago [-]

I think both exoskeletons and prosthetics suffer from similar issues. To reach affordability, you need unit volume. To get unit volume, you need an initial user segment capable of paying what such a device currently costs. In computing, for example, there's always someone willing to pay for bleeding edge, which then paves the way to price reduction. With these, there just isn't much daylight between prices where basically no one can afford it, and prices where everyone who needs one can.

People like you are the best hope. Independent inventors aren't bound by manufacturing economics.

Side question: how much power does it take to run existing, mobility oriented exoskeletons?

regularfry(10000) 7 days ago [-]

Size variation, maybe? If you're building an industrial robot you control the size precisely, but if you've got to make everything variable - and not just variable, but to be as comfortable as clothes - I can see that throwing a hell of a lot of complexity into the mix.

seph-reed(10000) 7 days ago [-]

Just for the love of ideating: if one's arms were working, it seems like there may be some way of essentially diverting arm strength between movement and handling.

Kind of like picking up one leg at a time with your arm, while the other stays locked through some ratchet system.

It's just a hazy idea, but it could definitely help with the power issue. It's pretty much the same principle as a wheel chair in that regard.

agumonkey(10000) 7 days ago [-]

I'm nearly ready to start a workshop. I think we ought to make these kinds of things more prevalent; a lot of money is lost on ultra-expensive medical devices with really low value (wheelchairs or similar).

Giving someone an exoskeleton might be something much more joyful and inspiring than a wheelchair; it can revive your life.

Fiahil(10000) 7 days ago [-]

Have you seen this, from a few years ago? Your project is different, but it can be a good inspiration!


beeskneecaps(10000) 7 days ago [-]

This is incredible! There's something really appealing and enabling about becoming an 'exoskeleton pilot'.

imglorp(10000) 7 days ago [-]

I feel like it's time to take back the term 'walker' from the last century.

runawaybottle(10000) 7 days ago [-]

This is kind of dangerous. Something like this that is not tested can pretty much kill you if it fails in any way. You'll just fall to the ground from about 5 ft, which is often enough to do something bad to your skull.

pomian(10000) 7 days ago [-]

Then don't do it. Everyone has their own level of comfort. It is disappointing to see awesome innovation criticized by armchair advisors. Innovation and progress are necessary for humanity to continue. Fear solves nothing. Super impressed by this dad!

ad404b8a372f2b9(10000) 7 days ago [-]

The medical device industry in France is heavily regulated and products are audited before they reach the market so you can be sure it's a risk that's been identified and remediated.

Blackthorn(10000) 7 days ago [-]

A lot of things are dangerous. Some people would be quite willing to accept that risk so they can walk again.

foobiekr(10000) 7 days ago [-]

My mother can no longer walk due to an inoperable bone spur that has destroyed her tendons. She's 75. She'd risk death and injury to walk again.

lsllc(10000) 7 days ago [-]

You're not wrong, but if you had some of the tech from the Boston Dynamics robots, you could imagine something smart enough to either retain balance in the first place, or take a forward/backward roll to prevent a more serious injury.

The Father here has done a fantastic job, I'm sure his son is thrilled to be able to take some steps -- also the low-tech tether from above solves the falling problem for the moment which presumably lets him get on with solving other issues.

Well done Dad!

invalidusernam3(10000) 7 days ago [-]

Cars are dangerous yet they're ubiquitous. Risk vs reward I guess

anonAndOn(10000) 7 days ago [-]

What if it falls into a pushup stance? In my fitter years, I could dead drop straight into a pushup.

brodouevencode(10000) 7 days ago [-]

The discipline of engineering is littered with catastrophes throughout history - everything from bridge collapses to cars losing braking power, like the old Buicks from the 50s. Yet we still keep innovating, because we know that despite the risks involved the reward could be immeasurable.

Besides, this dad doing it for his son is probably taking more care and precaution than some nameless/faceless engineer.

EDIT: clarity

giantg2(10000) 7 days ago [-]

A helmet seems like an easy fix for that.

ciupicri(10000) 7 days ago [-]

Isn't he wearing a helmet?

throwaway09223(10000) 7 days ago [-]

If you're worried about this, wait until you hear about the parents who give their kids bicycles and skateboards!

chris_va(10000) 7 days ago [-]

I am not a robotics expert, and I've always wanted to ask one...

Why do these designs always have the actuators mounted directly at the joints?

Why not have some high-modulus yarn transfer the joint loads (a la a bike brake cable, say with high-modulus aramid)? One could keep the actuators near the center of mass (or a trolley that follows it around), and keep the suit fairly minimal.

arpafaucon(10000) 6 days ago [-]

Hey! I am not a robotics expert, but I work at this company (Wandercraft) as a software dev, so I know some of the reasons :) First of all, I'd say it's because it's simpler: we have so much to do that simplicity is a strong decision factor. That way you also reduce the number of moving parts, so maintenance is easier. On the software side you also get the marginal benefit of keeping a 1:1 mapping between joints and actuators (for an example with coupled actuators, see Cassie, from Oregon State University). That way you can have different control strategies for the different joints.

jcun4128(10000) 7 days ago [-]

> bike brake cable

Funny you mention that; this guy [1] made an actuator/arm recently using that technique. It does work, but accuracy/flex can be a problem.

[1] https://www.youtube.com/watch?v=ahSS5HUylT8

nobody_nothing(10000) 7 days ago [-]

Just want to flag that, in addition to all this fun and exciting new tech for disabilities, we really do just need better basics.

As the partner of a wheelchair user, these exoskeletons are a great toy to have on the horizon – but the promise of this has been around for a long time, and while the hope of a better future is exciting, we need change on a much quicker time scale than this. (Not saying people shouldn't be working on this – it's rad that they are).

On a day-to-day level, my partner would much more benefit from:

- A better wheelchair (this will mean something different for every body – for them, someone who uses a wheelchair for chronic pain, an exoskeleton wouldn't even help – much more useful would be a wheelchair where every bump of pavement doesn't rattle your whole body).

- More accessible shops and public spaces. We don't really get to go out much – most often blockades include cracked sidewalks, friend's houses that require a step (or stairs), and inaccessible shops.

- Better safety nets for those with disabilities (it's VERY expensive to be disabled in America – consider how much more these exoskeletons will cost than already expensive wheelchairs).

tl;dr These technologies are exciting and deserve attention and energy – just keep in mind that the notion that these emerging technologies will someday wipe out all of wheelchair users' problems in one fell swoop is both false and does nothing for the many millions prevented from participating in society right now. We need better access today.

pomian(10000) 5 days ago [-]

I agree. I think the problem is that most people don't see (why would they?) the problem areas. Wheelchairs! What a pain. Try to find one that answers a different need? It's impossible, or super expensive. I have been pushing and pulling family members for years, across gravel, snow, rocks, stairs. Idea! Mountain bike. I wanted to find one with mountain bike tires. Seems easy. Not at all. I finally bought a cheapo fat-tire mountain bike, removed the wheels with the axles, and mounted them to a normal wheelchair. It's amazing. We go off road. Across rocks and gravel. Into creeks. But you can't buy one.

Historical Discussions: My tiny side project has had more impact than my decade in the software industry (August 01, 2021: 658 points)

(676) My tiny side project has had more impact than my decade in the software industry

676 points 2 days ago by mwilliamson in 10000th position

mike.zwobble.org | Estimated reading time – 4 minutes | comments | anchor

Sunday 1 August 2021 12:55

Way back in 2013, I started mammoth.js, a library that converts Word documents into HTML. It's not a large project - roughly 3,000 lines - nor is it particularly exciting.

I have a strong suspicion, though, that that tiny side project has had more of a positive impact than the entirety of the decade I've spent working as a software developer.

I wrote the original version on a Friday afternoon at work, when I realised some of my colleagues were spending hours and hours each week painstakingly copying text from a Word document into our CMS and formatting it. I wrote a tool to automate the process, taking advantage of our consistent in-house styling to map Word styles to the right CSS classes rather than producing the soup of HTML that Word generates. It wasn't perfect - my colleagues would normally still have to tweak a few things - but I'd guess it saved them over 90% of the time they were spending before on a menial and repetitive task.
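
The mechanics behind that mapping can be sketched with nothing but the standard library (in Python, one of the languages mammoth was ported to): a .docx file is a ZIP archive whose word/document.xml records each paragraph's style name, so a converter can swap style names for semantic HTML instead of reproducing Word's visual formatting. This is an illustrative sketch of the technique, not mammoth's code; the style map and the Heading1/BodyText names are invented.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

# Hypothetical style map, in the spirit of mammoth's style maps:
# Word style name -> semantic HTML tag.
STYLE_MAP = {"Heading1": "h1", "BodyText": "p"}

def docx_to_html(data: bytes) -> str:
    """Render each paragraph of a .docx (given as bytes) with its mapped tag."""
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        root = ET.fromstring(z.read("word/document.xml"))
    html = []
    for para in root.iter(W + "p"):
        style = para.find(f"{W}pPr/{W}pStyle")
        name = style.get(W + "val") if style is not None else "BodyText"
        tag = STYLE_MAP.get(name, "p")
        text = "".join(t.text or "" for t in para.iter(W + "t"))
        html.append(f"<{tag}>{text}</{tag}>")
    return "".join(html)

# Build a tiny two-paragraph document in memory to exercise the converter.
doc = (
    '<w:document xmlns:w="http://schemas.openxmlformats.org/'
    'wordprocessingml/2006/main"><w:body>'
    '<w:p><w:pPr><w:pStyle w:val="Heading1"/></w:pPr>'
    '<w:r><w:t>Title</w:t></w:r></w:p>'
    '<w:p><w:r><w:t>Body.</w:t></w:r></w:p>'
    '</w:body></w:document>'
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", doc)
print(docx_to_html(buf.getvalue()))  # <h1>Title</h1><p>Body.</p>
```

The real library goes much further (runs, lists, images, configurable style maps), but the core move is the same: keep the style name, throw away the visual formatting.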

Since it seemed like this was likely a problem that other people had, I made an open source implementation on my own time, first in JavaScript, later with ports to Python and Java. Since then, I've had messages from people telling me how much time it's saved them: perhaps the most heartwarming being from someone telling me that the hours they saved each week were being spent with their son instead.

I don't know what the total amount of time saved is, but I'm almost certain that it's at least hundreds of times more than the time I've spent working on the tool.

Admittedly, I've not really done all that much development on the project in recent years. The stability of the docx format means that the core functionality continues to work without changes, and most people use the same, small subset of features, so adding support for more cases and more features has rapidly diminishing returns. The nature of the project means that I don't actually need to support all that much of docx: since it tries to preserve semantic information by converting Word styles to CSS classes, rather than producing a high fidelity copy in HTML as Word does, it can happily ignore most of the actual details of Word formatting.

By comparison, having worked as a software developer for over a decade, the impact of the stuff I actually got paid to do seems underwhelming.

I've tried to pick companies working on domains that seem useful: developer productivity, treating diseases, education. While my success in those jobs has been variable - in some cases, I'm proud of what I accomplished, in others I'm pretty sure my net effect was, at best, zero - I'd have a tough time saying that the cumulative impact was greater than my little side project.

Sometimes I wonder whether it'd be possible to earn a living off mammoth. Although there's an option to donate - I currently get a grand total of £1.15 a week from regular donations - it's not something I push very hard. There are specific use cases that are more involved that I'll probably never be able to support in my spare time - for instance, support for equations - so potentially there's money to be made there.

I'm not sure it would make me any happier though. If I were a solo developer, I'd probably miss working with other people, and I'm not sure I really have the temperament to do the work to get enough sales to live off.

Somehow, though, it feels like a missed opportunity. Working on tools where the benefit is immediately visible is extremely satisfying, and there's probably plenty of domains where software could still help without requiring machine learning or a high-growth startup backed by venture capital. I'm just not sure what the best way is for someone like me to actually do so.

Topics: Software development

Thoughts? Comments? Feel free to drop me an email at [email protected]. You can also find me on Twitter as @zwobble.

All Comments: [-] | anchor

aaron695(10000) 1 day ago [-]

As an employee your contribution to the world is just a little more than your wage. Which, since you take your wage home, is just a little.

oxmane(10000) 1 day ago [-]

I believe this is incorrect even from the simple economic perspective.

Many companies (BigTech for example) are extremely net-positive financially. This means that the value they gain per employee is much higher than what they pay them.

Taking this further - the customers in many cases (and certainly in ideal cases) are getting more value than the price they are paying (hence they agree to pay), so in a sense you could argue that the value is actually even larger. This of course scales several times over when working on infrastructure (though the effect is far from obvious).

sdevonoes(10000) 2 days ago [-]

Isn't the statement a bit obvious? Most of the tech companies out there were not born to make an impact on society, they were born to make money (which is totally fine). Only a handful of companies can be proud of making money while actually having an impact (good or bad) on society (e.g., Apple). Most of us (us != HN crowd) work for companies in the first group.

On the other hand, the vast majority of side projects have a very different purpose: to have fun and/or be useful. Things that are useful usually make an impact on society.

wintermutestwin(10000) 2 days ago [-]

All enterprises have multiple impacts on society. Many people want their work to be tied to an overall net-positive organization. That's why so many companies put energy into saying they are beneficial to society. Too bad those statements don't have to be vetted by an independent and impartial rating agency of some sort.

js8(10000) 2 days ago [-]

> Isn't the statement a bit obvious?

No, it isn't obvious, just like the Earth is not obviously round. Even 40 years ago, the statement wasn't obvious to many public intellectuals in the West, when they were in favor of neoliberal economic policies, under the broad assumption that free markets will automatically bring meritocracy (whatever it is). I think this example is one of many that show that meritocracy doesn't really happen by itself, but requires societal consciousness to be implemented.

georgeburdell(10000) 2 days ago [-]

As a dev at a large company, I'm wondering how the author went about open sourcing the side project. My employment contract stipulates that anything I write for work is owned by the company. This company wouldn't have any motivation to let me open source the work I do, which in my case does not go into any product.

The reason I ask is that I have written, as part of my day job, a scientific library in C# that doesn't appear to have any public equivalent and I know addresses common tasks in the industry. I would love to open source it, if not for beer money, but for visibility to help my career --- I'm at that point where promotions only happen with externally-visible accomplishments.

conductr(10000) 1 day ago [-]

As a hack, could you use the work as inspiration to create a similar port in another language during off time?

mwilliamson(10000) 1 day ago [-]

Author here! The open sourced code was rewritten from scratch, not least because the original version was in C# while the open source version is in JavaScript. So, the same idea, but an entirely new implementation.

I forget exactly what conversations I had, but it was also very clear that they had no problem with me doing so since it doesn't really have any connection with their core business. If I wanted to, I suspect I could have open sourced the original version so long as I stripped out the stuff that was specific to the company.

tyree731(10000) 2 days ago [-]

At my firm you can request that side projects not be subject to the company's copyright agreement. The firm's market is fairly specific, so most requests get approved.

drran(10000) 1 day ago [-]

> My employment contract stipulates that anything I write for work is owned by the company.

Did you agree on the price of that extra work? If not, I recommend setting the price prohibitively high, e.g. $100 per line of code or $1000 per hour of work. If your company really wants to own your side project, then they will need to pay the requested price.

mwcampbell(10000) 2 days ago [-]

> Sometimes I wonder whether it'd be possible to earn a living off mammoth. Although there's an option to donate - I currently get a grand total of £1.15 a week from regular donations - it's not something I push very hard.

There's no shame in making the source available but using a license that requires payment for commercial use, like the Prosperity [1] license.

[1]: https://prosperitylicense.com/

abiro(10000) 2 days ago [-]

The problem with this approach is that it turns open source adoption into a procurement process: the developer who wants to use your projects needs to go through the legal department etc. So if there is any alternative option, employee devs will avoid a dual-licensed package.

musingsole(10000) 2 days ago [-]

Also nearly impossible to enforce. This approach just adds a layer of guilt/paranoia in the implied legal consequences.

gitgud(10000) 1 day ago [-]

If a library has a restrictive license like that, then it had better be polished and extremely user friendly, or I'm not using it....

Whereas, if it's a permissive license like; MIT, then I'm personally more likely to be much more forgiving and even try and fix problems.

goodpoint(10000) 2 days ago [-]

This license is not written by a legal expert, I suspect.

For example this clause, phrased like an order, does not make sense:

'Don't make any legal claim against anyone accusing this software, with or without changes, alone or with other technology, of infringing any patent.'

You can't give orders to people in a license or other contracts. You can only describe conditions.

tonyedgecombe(10000) 2 days ago [-]

Some people don't want to run a business and that's OK.

DrOctagon(10000) 2 days ago [-]

Quite a few years ago, as a (very) junior FE dev, I used mammoth.js to automate the generation of a biennial report. Three months had been earmarked for the task, as this is what it had taken in previous years (due to bad tools and lack of expertise). I had it done in less than a week by using mammoth. Mike was also very accommodating with questions I'm pleased I can't remember, as I'm sure they were embarrassingly simple.

The project still took three months as it was one of those special type of organisations, but it wasn't due to the HTML generation :)

Thank you for mammoth.js!

musingsole(10000) 2 days ago [-]

Your story and mammoth's origin story really highlight how the Sisyphean corporate world is seemingly at odds with -- or at least tangential to -- progress.

jacquesm(10000) 2 days ago [-]

Same here, I spent many years on complex and large pieces of software with relatively little impact, and an 'all nighter' pretty much changed the world for real time video on the web. Pretty weird when you consider that it was mostly gluing together pieces that already existed and adding a small HTTP server.

I haven't been able to replicate anything close to that success in all the years since then.

elcomet(10000) 2 days ago [-]

what was the software ?

tppiotrowski(10000) 2 days ago [-]

Companies isolate devs from customers by using Product Managers. PMs interview customers and then decide what to build. By the time the tasks get to the dev it's hard to understand the motivation and impact. The best companies I've worked at put the engineers and customers in close contact so they understood the impact and shortcomings in their work. Alternately you need to foster a culture of shared purpose where you have "faith" that your work has impact.

wreath(10000) 1 day ago [-]

I've seen far more engineers completely disinterested in anything customer-related (some even calling users stupid!) than I've seen PMs isolating devs. Every time I ask a PM if I could interact with a customer (not asking for permission, but to hook me up), they were delighted. Every single time.

Another way to solve this problem is to have engineers on a customer support rotation of some sort. This way, engineers get to see how their software is used in the wild and interact with customers, and PMs get to see how unrealistic expectations and deadlines come back to bite you in the ass in the form of your engineers being busy fixing half-assed crap.

formerly_proven(10000) 2 days ago [-]

The project I'm currently the 'lead developer' on never had a PM, so we directly communicate with users/customers. Often we are able to implement changes and make them available in a test environment within the hour - people are amazed, because they're used to everything taking days at the very least, usually weeks or months, with a lot of stuff never being fixed and so on.

shoto_io(10000) 2 days ago [-]

I am not sure if it's correct, but I have heard that Apple doesn't have PMs on their main products because of that?

kevinmchugh(10000) 2 days ago [-]

The best PMs I've worked with have very clear understandings of customer workflows and needs (or work towards developing them) and communicate those to the dev team. They also make sure to share customer feedback, whether constructive criticism or excitement/thanks.

It's really motivating to hear customers are happy and I don't know why a PM wouldn't share that good news.

davidivadavid(10000) 2 days ago [-]

Documenting and communicating motivation and impact are often pretty high up the priority list for PMs, otherwise you're just a context-free ticket factory, which typically results in subpar products.

sheetjs(10000) 1 day ago [-]

Our story is very similar. I wrote a small library for converting XLSX and XLS files to CSV. Over the years, that grew into one of the most popular open source libraries on npm/github: https://github.com/SheetJS/sheetjs

Back in 2015, patio11 reached out to us. In addition to a structured license purchase, he gave great insights and actually wrote a blog post about the experience: https://www.kalzumeus.com/2015/01/28/design-and-implementati...

Today, we offer paid software builds to solve related problems and it allows us to work on SheetJS full-time!

patio11(10000) 1 day ago [-]

I'm thrilled that this is going so well!

tppiotrowski(10000) 2 days ago [-]

I've worked at a VC funded startup that burned $3 million for 2,000 users.

I built a side project that I put on Reddit that got 5,000 hits last month.

The second seems like better ROI

high_byte(10000) 1 day ago [-]

you spent $7.5m just last month?! on a side project!

CharlieMunger(10000) 2 days ago [-]

I am one of dozens of moderators of the Hardcore Berkshire Hathaway subreddit: https://old.reddit.com/r/brkb/

I put in a little effort each day, but thousands of people benefit. Small work, big impact. Contributing on Reddit is an easy way to have a meaningful effect on the world.

p4bl0(10000) 2 days ago [-]

I'm a CS associate professor. I teach a third-year course at my uni which consists of contributing to a piece of free software. A few years ago a couple of students decided to make a contribution to GNU ls. The change made the output color independent of capitalisation (it is based on the filename extension). Their code was accepted. It was a tiny, tiny contribution, but it's probable that these few lines of code are and will be executed a few thousand times more than all the other contributions my students made.

aiisjustanif(10000) 2 days ago [-]

Glad to see France promoting meaningful experiences for CS students.

matheusmoreira(10000) 1 day ago [-]

Sounds like an amazing class.

inson(10000) 1 day ago [-]

That's the best way of teaching CS! Bravo! I wish US professors had the same attitude as yours.

inamiyar(10000) 2 days ago [-]

That's so cool! Are you able/willing to share the course name/a course page?

iamcreasy(10000) 2 days ago [-]

That's great! Did you guide them through the process? Did you motivate them in any way?

ozarkerD(10000) 2 days ago [-]

Man I wish my college offered courses like these. I don't regret getting my degree but I sure didn't get much out of it besides the piece of paper that made me hire-able in some people's eyes.

antoviaque(10000) 1 day ago [-]

That's great to see this course -- there aren't enough good course materials on how to contribute to free software, and it can be quite intimidating the first time. Yet it can definitely be some of the most rewarding development work...

With others from a few large free software projects & communities (Open edX, OpenStack, Framasoft, Mines-Telecom...), we are in the initial stages of producing an online course / MOOC about contributing to free software. If you (or anyone else reading this :) ) are interested in contributing or joining the project at this early stage, please let me know. : ) My email is on my profile.

Presentation site (draft): https://larrybotha.gitlab.io/mooc-floss/ Repository: https://gitlab.com/mooc-floss/mooc-floss

icemelt8(10000) 2 days ago [-]

I think the only other team who has built such a feature is the WordPress contributors: WordPress offers a way to copy-paste Word documents into its text box, and it formats them correctly.

Perhaps you can launch a commercial license for your library and license it to well-known CMSs such as Umbraco or Craft. That way you can make a living from it too.

lmz(10000) 2 days ago [-]

They mention docx so I guess it takes the file as input (as opposed to clipboard contents for web JS editors).

codingdave(10000) 2 days ago [-]

A few years ago that would have been true, at least client-side. There have been a variety of server-side options for years, especially if you run Windows as the server OS. More recently editors like CKEditor have vastly improved their Word pasting.

Even so, mammoth.js seems to remain unique in the ability to upload a file in the browser and programmatically retrieve matching HTML. That is why I use it - I can get the HTML, process it, and POST it back to my server already cleaned up and ready for my CMS. The browser does all the heavy lifting, my server remains a basic CRUD app, and I don't have to allow file uploads as it all happens client-side.

Wronnay(10000) 2 days ago [-]

I think many devs have that feeling...

But often simple things have a very big effect - in my first job I made some simple scripts which imported data from machines into an ERP System.

I also made some bigger projects with feature rich GUIs at my first job.

The simple scripts are probably still importing data every workday, and have been automating a task previously done by humans for multiple years now, while some of the GUIs weren't even used daily before I left that job...

So I feel like the simple scripts will be there for a long time and save many work hours, while some of the feature-rich GUIs probably weren't necessary...

D13Fd(10000) 2 days ago [-]

It's crazy how much time little scripts can save.

10 years ago at my current job I created a script to automate the job of checking a certain website for new data each day. It used to be done by a person who would spend maybe 20-30 minutes checking the site and circulating the info each day. Others would also check the site on their own periodically for faster updates.

The script just checks the site multiple times a day and circulates the results.

Over the course of 10 years, I'd guess that my little script I wrote in maybe 5-10 hours (including some tweaks over time as the site format changed) probably saved in the ballpark of half a million dollars in time spent, based on billing rates.

amelius(10000) 2 days ago [-]

Ok, but then you should also ask: how much of that is my work and would someone else have written the same simple scripts? Perhaps the main value is in the existence of these ERP systems.

From that p.o.v. your feature rich GUIs may be your biggest contribution to society, because that's really work based on your decisions.

flakiness(10000) 2 days ago [-]

Yeah, this happens even within a side project. Smaller, trivial ones tend to be found useful, while bigger, ambitious ones tend to end up disappointing, even when completed.

At work, giving a bit of quick, on-the-spot help often feels more helpful than pushing through a 'proper' project task.

I suspect this is because a large part of the 'work' project is more like a speculative investment than something obviously useful. That is probably OK, because finding large-enough, obviously-useful things is hard. What we tend to overlook is that finding a tiny-but-obviously-useful thing isn't as hard as it looks. It's just hard to earn enough from it.

snarfy(10000) 2 days ago [-]

My most used program was a 7 byte .COM I wrote with debug.exe. It made the machine reboot. It took about 15 seconds to write. A friend who worked at the college ended up using it in their scripts. They had a way to do it before but it wasn't as reliable as my little program. That college's infrastructure was used as a model for the rest of the district and so my little program spread to the other colleges.

alien_(10000) 2 days ago [-]

Same here, for most of my 15+ years career in IT I've been doing DevOps stuff, mostly writing small scripts and infrastructure code, occasionally hacking on existing projects enough to do drive-by contributions.

About 6 years ago I started AutoSpotting, an open source tool designed to reduce AWS costs by automatically replacing on-demand instances with up to 90% cheaper Spot instances. It was meant to be my playground project for learning Go.

I estimate it saved in aggregate in the tens or maybe even hundreds of millions of dollars and multiple companies have been built around my code or reimplementing the same idea.

It's still a side project that I work on occasionally, but at some point I tried to monetize it through support and custom development. I failed to get enough traction for it to become a full-time job; I currently make some $400/month from about a dozen users to whom I sell precompiled binaries through Patreon.

WrtCdEvrydy(10000) 2 days ago [-]

> I estimate it saved in aggregate in the tens or maybe even hundreds of millions of dollars and multiple companies have been built around my code or reimplementing the same idea.

> It's still a side project that I work on occasionally but at some point I tried to monetize it through support and custom development. I failed to get enough traction to become a full time job, currently make some $400/month from about a dozen users whom I sell precompiled binaries through Patreon.

We've had a lot of issues doing donations for cool projects like this. I'd really like a simple subscription service à la Gumroad so we can sign up for the 'Enterprise' tier. Saving $100k, we can totally kick back $5k to the person every month without feeling it.

tmp65535(10000) 2 days ago [-]

Similarly, I've been writing respectable software for decades but I'm fairly certain that my most widely used piece of software, by a wide margin, is a mildly pornographic app (http://driftwheeler.com)

This app, published in 2017, has a continuously growing population of users from all over the world. I get email every day asking whether soft1 is the only server, thanking me, suggesting improvements, etc.

It's ironic, and there is a lesson to be learned here.

secondaryacct(10000) 2 days ago [-]

That porn for males is massively popular? What a lesson :D Isn't that the whole purpose of the internet?

alien_(10000) 2 days ago [-]

It's just a numbers game. The total number of people who consume porn is probably many orders of magnitude larger than the audience of your respectable software, so even if you're tapping into a very small niche of porn consumers, it can be enough to overtake your other software.

ushakov(10000) 2 days ago [-]

Have you tried working in the public sector? I'm proud my software runs in our hospitals and saves hours for doctors, which they now spend treating more patients!

codingdave(10000) 2 days ago [-]

I work in the public sector, and mammoth.js has been a huge help in allowing me to create tools for school districts to bring content from Word docs into our tools. So yes.... even if not directly, the OPs work has definitely had an impact in the public sector.

logotype(10000) 2 days ago [-]

I work in finance. About 5 years ago I started building a FIX library in my spare time, out of curiosity. Over the years countless fintech start-ups as well as big companies have reached out to me about the library, suggesting fixes and features. Since then I have long lost interest in the technology which enables connectivity to financial exchanges to automate trades, but I keep working on the library for the benefit of others and just for the joy of creating something. But what's the actual impact? Enabling companies to get richer? Greed?

qmmmur(10000) 1 day ago [-]

Are you seriously asking that question about people who want to automate trading in the stock market?

agumonkey(10000) 2 days ago [-]

I'm trying to find niches where positive feedback loops have a nicer effect on society (right now I'm trying to help with courthouses' daily duties, for instance).

CydeWeys(10000) 2 days ago [-]

Are your users compensating you well for this? Sounds like the impact you're having is making fintech serious money, so I hope you're getting some of that.

alien_(10000) 2 days ago [-]

I'd probably offer some sort of support/consultancy services and prioritize the work of the paying users, making it clear when filing github issues and pull requests. This could easily sustain you in a passive income way so you can spend most of your time working on things that matter more to humanity.

satellite2(10000) 2 days ago [-]

I've been a user of your project and it's fantastic. I think you're being too hard on yourself.

The proliferation of marketplaces and assets to trade (and all types of projects your library enabled: automated trading, market making, etc.) can be seen as a net positive, as a win-win for all parties involved and more.

From a microeconomic perspective, your library gave startups using JS and relatively simple tech stacks access to a tool that would otherwise be relatively complicated and time-consuming to build, and for which they would not necessarily have the resources. By doing so you lowered the cost of entry and allowed new entrants to challenge the status quo.

And from a macroeconomic perspective I think it's one of the reasons access to capital has been simplified and has dropped in cost; it's one of the reasons for the low cost of mortgages and the relative resilience of economies during covid (central bank interventions are useless without the participation of banks and other financial intermediaries; it would be like pushing on a string).

Don't be fooled by the contemporary contempt for finance. It's still the most important reason for the constant improvements in prosperity and peace of recent years, and the best thing one can do is to make it less arcane and open it to more people. That is one of the achievements of your tool.

You can be proud of your work.

boulos(10000) 2 days ago [-]

Huh! You left out the most funky part: you wrote a FIX parser in Typescript.

What are people using it for? To make web apps and just not have to roll their own wrappers around FIX/FIX-derived data? (That is, if all back ends are used to speaking FIX, it's nicer to have the front end and web serving tools also just speak it?).

bigyikes(10000) 2 days ago [-]

I don't think you have any obligation to continue supporting the software, but give yourself more credit! You're making dozens (hundreds? thousands?) of developers lives significantly better. You're helping bring the finance industry out of the stone age. You've created something useful and valuable, which is no small feat.

jjeaff(10000) 1 day ago [-]

There is a common refrain in charitable organizations that are focused on helping the poor:

'Sometimes you have to feed the greedy to get to the truly needy.'

In other words, the greedy will always take advantage, but that doesn't diminish the help you give to the needy. There is simply no way to totally filter for the greedy.

This doesn't really apply directly to your situation, but at least for every greedy company that benefits from free products like yours, there are a lot of jobs created and benefit to society as a whole when these companies create useful products.

mech4bg(10000) 1 day ago [-]

This is amazing. I now work in consumer SaaS web development, but I worked for a long time in finance, and for 7 years of that I focused exclusively on FIX, writing a real-time, highly performant market interface. I would have _loved_ to have had something like this while I was working on it. Have you thought of commercializing it? Do you get to use it in your day-to-day work?

high_byte(10000) 2 days ago [-]

if you want to erase billions and stop the richness just remove the library, or better yet add a typo :D

Historical Discussions: Faster CRDTs: An Adventure in Optimization (July 31, 2021: 664 points)

(671) Faster CRDTs: An Adventure in Optimization

671 points 3 days ago by xnx in 10000th position

josephg.com | Estimated reading time – 51 minutes | comments | anchor

5000x faster CRDTs: An Adventure in Optimization

July 31 2021

A few years ago I was really bothered by an academic paper.

Some researchers in France put together a comparison showing lots of ways you could implement realtime collaborative editing (like Google Docs). They implemented lots of algorithms - CRDTs and OT algorithms and stuff. And they benchmarked them all to see how they perform. (Cool!!) Some algorithms worked reasonably well. But others took upwards of 3 seconds to process simple paste operations from their editing sessions. Yikes!

Which algorithm was that? Well, this is awkward but .. it was mine. I mean, I didn't invent it - but it was the algorithm I was using for ShareJS. The algorithm we used for Google Wave. The algorithm which - hang on - I knew for a fact didn't take 3 seconds to process large paste events. What's going on here?

I took a closer look at the paper. In their implementation, when a user pasted a big chunk of text (like 1000 characters), instead of creating 1 operation with 1000 characters, their code split the insert into 1000 individual operations. And each of those operations needed to be processed separately. D'oh - of course it'll be slow if you do that! This isn't a problem with the operational transformation algorithm. This is just a problem with their particular implementation.

The infuriating part was that several people sent me links to the paper and (pointedly) asked me what I think about it. Written up as a Published Science Paper, these speed comparisons seemed like a Fact About The Universe. And not what they really were - implementation details of some java code, written by a probably overstretched grad student. One of a whole bunch of implementations that they needed to code up.

'Nooo! The peer reviewed science isn't right everybody! Please believe me!'. But I didn't have a published paper justifying my claims. I had working code but it felt like none of the smart computer science people cared about that. Who was I? I was nobody.

Even talking about this stuff we have a language problem. We describe each system as an 'algorithm'. Jupiter is an Algorithm. RGA is an Algorithm. But really there are two very separate aspects:

  1. The black-box behaviour of concurrent edits. When two clients edit the same region of text at the same time, what happens? Are they merged, and if so in what order? What are the rules?
  2. The white-box implementation of the system. What programming language are we using? What data structures? How well optimized is the code?

If some academic's code runs slowly, what does that actually teach us? Maybe it's like tests. A passing test suite suggests, but can never prove that there are no bugs. Likewise a slow implementation suggests, but can never prove that every implementation of the system will be slow. If you wait long enough, somebody will find more bugs. And, maybe, someone out there can design a faster implementation.

Years ago I translated my old text OT code into C, Javascript, Go, Rust and Swift. Each implementation has the same behaviour, and the same algorithm. But the performance is not even close. In javascript my transform function ran about 100 000 times per second. Not bad! But the same function in C does 20M iterations per second. That's 200x faster. Wow!

Were the academics testing a slow version or the fast version of this code? Maybe, without noticing, they had fast versions of some algorithms and slow versions of others. It's impossible to tell from the paper!

Making CRDTs fast

So as you may know, I've been getting interested in CRDTs lately. For the uninitiated, CRDTs (Conflict-free Replicated Data Types) are fancy programming tools which let multiple users edit the same data at the same time. They let you work locally with no lag. (You don't even have to be online). And when you do sync up with other users & devices, everything just magically syncs up and becomes eventually consistent. The best part of CRDTs is that they can do all that without even needing a centralized computer in the cloud to monitor and control everything.

I want Google Docs without google. I want my apps to seamlessly share data between all my devices, without me needing to rely on some flakey startup's servers to still be around in another decade. I think they're the future of collaborative editing. And maybe the future of all software - but I'm not ready to talk about that yet.

But most CRDTs you read about in academic papers are crazy slow. A decade ago I decided to stop reading academic papers and dismissed them. I assumed CRDTs had some inherent problem. A GUID for every character? Nought but madness comes from those strange lands! But - and this is awkward to admit - I think I've been making the same mistake as those researchers. I was reading papers which described the behaviour of different systems. And I assumed that meant we knew the best way to implement those systems. And wow, I was super wrong.

How wrong? Well. Running this editing trace, Automerge (a popular CRDT, written by a popular researcher) takes nearly 5 minutes to run. I have a new implementation that can process the same editing trace in 56 milliseconds. That's 0.056 seconds, which is over 5000x faster. It's the largest speed up I've ever gotten from optimization work - and I'm utterly delighted by it.

Let's talk about why automerge is currently slow, and I'll take you through all the steps toward making it super fast.

Wait, no. First we need to start with:

What is automerge?

Automerge is a library to help you do collaborative editing. It's written by Martin Kleppmann, who's a little bit famous from his book and excellent talks. Automerge is based on an algorithm called RGA, which you can read about in an academic paper if you're into that sort of thing.

Martin explains automerge far better than I will in this talk from 2020:

Automerge (and Yjs and other CRDTs) think of a shared document as a list of characters. Each character in the document gets a unique ID, and whenever you insert into the document, you name what you're inserting after.

Imagine I type 'abc' into an empty document. Automerge creates 3 items:

  • Insert 'a' id (seph, 0) after ROOT
    • Insert 'b' id (seph, 1) after (seph, 0)
      • Insert 'c' id (seph, 2) after (seph, 1)
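Written out as plain data, those three items might look like this (a sketch - the field names are my own, not automerge's):

```javascript
// The three inserts above as plain data. An id is an [agent, counter] pair;
// parent is the id of the item we inserted after (null means ROOT).
const items = [
  { id: ['seph', 0], parent: null,        char: 'a' },
  { id: ['seph', 1], parent: ['seph', 0], char: 'b' },
  { id: ['seph', 2], parent: ['seph', 1], char: 'c' },
]
```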

We can draw this as a tree!

Let's say Mike inserts an 'X' between a and b, so we get 'aXbc'. Then we have:

  • Insert 'a' id (seph, 0) after ROOT
    • Insert 'X' id (mike, 0) after (seph, 0)
    • Insert 'b' id (seph, 1) after (seph, 0)
      • Insert 'c' id (seph, 2) after (seph, 1)

Note the 'X' and 'b' both share the same parent. This will happen when users type concurrently in the same location in the document. But how do we figure out which character goes first? We could just sort using their agent IDs or something. But argh, if we do that the document could end up as abcX, even though Mike inserted X before the b. That would be really confusing.

Automerge (RGA) solves this with a neat hack. It adds an extra integer to each item called a sequence number. Whenever you insert something, you set the new item's sequence number to be 1 bigger than the biggest sequence number you've ever seen:

  • Insert 'a' id (seph, 0) after ROOT, seq: 0
    • Insert 'X' id (mike, 0) after (seph, 0), seq: 3
    • Insert 'b' id (seph, 1) after (seph, 0), seq: 1
      • Insert 'c' id (seph, 2) after (seph, 1), seq: 2

This is the algorithmic version of 'Wow I saw a sequence number, and it was this big!' 'Yeah? Mine is even bigger!'

The rule is that children are sorted first based on their sequence numbers (bigger sequence number first). If the sequence numbers match, the changes must be concurrent. In that case we can sort them arbitrarily based on their agent IDs. (We do it this way so all peers end up with the same resulting document.)

Yjs - which we'll see more of later - implements a CRDT called YATA. YATA is identical to RGA, except that it solves this problem with a slightly different hack. But the difference isn't really important here.

Automerge (RGA)'s behaviour is defined by this algorithm:

  • Build the tree, connecting each item to its parent
  • When an item has multiple children, sort them by sequence number then by their ID.
  • The resulting list (or text document) can be made by flattening the tree with a depth-first traversal.
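Those three steps can be sketched naively in a few lines of JavaScript. This is illustrative only - not automerge's actual code - and the item shape and the tie-break direction on agent IDs are assumptions:

```javascript
// Naive sketch of RGA's behaviour: build the tree, sort siblings, flatten.
// items: [{ id: [agent, counter], parent: id-or-null, seq, char }].
function flattenRga(items) {
  const key = id => id.join(':')
  const children = new Map() // parent key -> list of child items
  for (const item of items) {
    const k = item.parent === null ? 'ROOT' : key(item.parent)
    if (!children.has(k)) children.set(k, [])
    children.get(k).push(item)
  }
  // Siblings sort by descending seq; ties broken by agent id (assumed ascending).
  for (const kids of children.values()) {
    kids.sort((a, b) => b.seq - a.seq || (a.id[0] < b.id[0] ? -1 : 1))
  }
  // Depth-first traversal produces the text document.
  const out = []
  const visit = k => {
    for (const item of children.get(k) ?? []) {
      out.push(item.char)
      visit(key(item.id))
    }
  }
  visit('ROOT')
  return out.join('')
}
```

Feeding it the 'aXbc' example from above (Mike's 'X' has seq 3, so it sorts before 'b' with seq 1) reproduces the expected document.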

So how should you implement automerge? The automerge library does it in the obvious way, which is to store all the data as a tree. (At least I think so - after typing 'abc' this is automerge's internal state. Uh, uhm, I have no idea what's going on here. And what are all those Uint8Arrays doing all over the place? Whatever.) The automerge library works by building a tree of items.

For a simple benchmark, I'm going to test automerge using an editing trace Martin himself made. This is a character by character recording of Martin typing up an academic paper. There aren't any concurrent edits in this trace, but users almost never actually put their cursors at exactly the same place and type anyway, so I'm not too worried about that. I'm also only counting the time taken to apply this trace locally, which isn't ideal but it'll do. Kevin Jahns (Yjs's author) has a much more extensive benchmarking suite here if you're into that sort of thing. All the benchmarks here are done on my chonky ryzen 5800x workstation, with Nodejs v16.1 and rust 1.52 when that becomes appropriate. (Spoilers!)

The editing trace has 260 000 edits, and the final document size is about 100 000 characters.

As I said above, automerge takes a little under 5 minutes to process this trace. That's just shy of 900 edits per second, which is probably fine. But by the time it's done, automerge is using 880 MB of RAM. Whoa! That's 10kb of ram per key press. At peak, automerge was using 2.6 GB of RAM!

To get a sense of how much overhead there is, I'll compare this to a baseline benchmark where we just splice all the edits directly into a javascript string. This throws away all the information we need to do collaborative editing, but it gives us a sense of how fast javascript is capable of going. It turns out javascript running on V8 is fast:

Test                        | Time taken | RAM usage
automerge (v1.0.0-preview2) | 291s       | 880 MB
Plain string edits in JS    | 0.61s      | 0.1 MB
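The baseline is nothing fancier than string splicing. A sketch, assuming the trace is a list of [position, deleteCount, insertedText] edits:

```javascript
// Plain-string baseline: apply each edit with string slicing.
// No CRDT metadata at all - this just measures raw V8 string speed.
// Assumed edit format: [position, deleteCount, insertedText].
function applyTrace(edits) {
  let doc = ''
  for (const [pos, delCount, ins] of edits) {
    doc = doc.slice(0, pos) + ins + doc.slice(pos + delCount)
  }
  return doc
}
```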

This is a chart showing the time taken to process each operation throughout the test, averaged in groups of 1000 operations. I think those spikes are V8's garbage collector trying to free up memory.

In the slowest spike near the end, a single edit took 1.8 seconds to process. Oof. In a real application, the whole app (or browser tab) would freeze up for a couple of seconds sometimes while you're in the middle of typing.

The chart is easier to read when we average everything out a bit and zoom the Y axis. We can see the average performance gets gradually (roughly linearly) worse over time.

Why is automerge slow though?

Automerge is slow for a whole slew of reasons:

  1. Automerge's core tree based data structure gets big and slow as the document grows.
  2. Automerge makes heavy use of Immutablejs. Immutablejs is a library which gives you clojure-like copy-on-write semantics for javascript objects. This is a cool set of functionality, but the V8 optimizer & GC struggles to optimize code that uses immutablejs. As a result, it increases memory usage and decreases performance.
  3. Automerge treats each inserted character as a separate item. Remember that paper I talked about earlier, where copy+paste operations are slow? Automerge does that too!

Automerge was just never written with performance in mind. Their team is working on a replacement rust implementation of the algorithm to run through wasm, but at the time of writing it hasn't landed yet. I got the master branch working, but they have some kinks to work out before it's ready. Switching to the automerge-rs backend doesn't make average performance in this test any faster. (Although it does halve memory usage and smooth out performance.)

There's an old saying with performance tuning:

You can't make the computer faster. You can only make it do less work.

How do we make the computer do less work here? There's lots of performance wins to be had from going through the code and improving lots of small things. But the automerge team has the right approach. It's always best to start with macro optimizations. Fix the core algorithm and data structures before moving to optimizing individual methods. There's no point optimizing a function when you're about to throw it away in a rewrite.

By far, Automerge's biggest problem is its complex tree based data structure. And we can replace it with something faster.

Improving the data structure

Luckily, there's a better way to implement CRDTs, pioneered in Yjs. Yjs is another (competing) opensource CRDT implementation made by Kevin Jahns. It's fast, well documented and well made. If I were going to build software which supports collaborative editing today, I'd use Yjs.

Yjs doesn't need a whole blog post talking about how to make it fast because it's already pretty fast, as we'll see soon. It got there by using a clever, obvious data structure 'trick' that I don't think anyone else in the field has noticed. Instead of implementing the CRDT as a tree like automerge does:

Yjs just puts all the items in a single flat list:

That looks simple, but how do you insert a new item into a list? With automerge it's easy:

  1. Find the parent item
  2. Insert the new item into the right location in the parents' list of children

But with this list approach it's more complicated:

  1. Find the parent item
  2. Starting right after the parent item, iterate through the list until we find the location where the new item should be inserted (?)
  3. Insert it there, splicing into the array

Essentially, this approach is just a fancy insertion sort. We're implementing a list CRDT with a list. Genius!

This sounds complicated - how do you figure out where the new item should go? But it's complicated in the same way math is complicated. It's hard to understand, but once you understand it, you can implement the whole insert function in about 20 lines of code:

(But don't be alarmed if this looks confusing - we could probably fit everyone on the planet who understands this code today into a small meeting room.)
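The listing itself isn't reproduced here, but the rules can be sketched roughly like this. This is illustrative only - not the reference-crdts code - and the field names and the agent tie-break direction are assumptions:

```javascript
// Rough flat-list RGA insert, following the rules described above.
// Items: { id: [agent, counter], parent: id-or-null, seq, char }.
const idEq = (a, b) => a === b || (a && b && a[0] === b[0] && a[1] === b[1])
const indexOfId = (doc, id) =>
  id === null ? -1 : doc.findIndex(item => idEq(item.id, id))

function integrate(doc, newItem) {
  const parentIdx = indexOfId(doc, newItem.parent)
  let destIdx = parentIdx + 1

  // Scan right from the parent, skipping past siblings that sort before us
  // (bigger seq first, ties broken by agent id) along with their subtrees.
  for (let i = parentIdx + 1; i < doc.length; i++) {
    const o = doc[i]
    const oParentIdx = indexOfId(doc, o.parent)
    if (oParentIdx < parentIdx) break // we've left our parent's subtree
    if (oParentIdx === parentIdx) {
      // Direct sibling: stop if it sorts after newItem, otherwise skip it.
      if (o.seq < newItem.seq
          || (o.seq === newItem.seq && o.id[0] > newItem.id[0])) break
    }
    destIdx = i + 1 // this item belongs before newItem; keep scanning
  }
  doc.splice(destIdx, 0, newItem)
}
```

Replaying the earlier example - typing 'abc', then Mike's concurrent 'X' with seq 3 after 'a' - the 'X' sorts before 'b' (seq 1) and the document comes out as 'aXbc'.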

I implemented both Yjs's CRDT (YATA) and Automerge using this approach in my experimental reference-crdts codebase. Here's the insert function, with a few more comments. The Yjs version of this function is in the same file, if you want to have a look. Despite being very different papers, the logic for inserting is almost identical. And even though my code is very different, this approach is semantically identical to the actual automerge, and Yjs and sync9 codebases. (Fuzzer verified (TM)).

If you're interested in going deeper on this, I gave a talk about this approach at a braid meeting a few weeks ago.

The important point is this approach is better:

  1. We can use a flat array to store everything, rather than an unbalanced tree. This makes everything smaller and faster for the computer to process.
  2. The code is really simple. Being faster and simpler moves the Pareto efficiency frontier. Ideas which do this are rare and truly golden.
  3. You can implement lots of CRDTs like this. Yjs, Automerge, Sync9 and others work. You can implement many list CRDTs in the same codebase. In my reference-crdts codebase I have an implementation of both RGA (automerge) and YATA (Yjs). They share most of their code (everything except this one function) and their performance in this test is identical.

Theoretically this algorithm can slow down when there are concurrent inserts in the same location in the document. But that's really rare in practice - you almost always just insert right after the parent item.

Using this approach, my implementation of automerge's algorithm is about 10x faster than the real automerge. And it's 30x more memory-efficient:

Test                              | Time taken | RAM usage
automerge (v1.0.0-preview2)       | 291s       | 880 MB
reference-crdts (automerge / Yjs) | 31s        | 28 MB
Plain string edits in JS          | 0.61s      | 0.1 MB

I wish I could attribute all of that difference to this sweet and simple data structure. But a lot of the difference here is probably just immutablejs gumming automerge up.

It's a lot faster than automerge:

Death by 1000 scans

We're using a clean and fast core data abstraction now, but the implementation is still not fast. There are two big performance bottlenecks in this codebase we need to fix:

  1. Finding the location to insert, and
  2. Actually inserting into the array

(These lines are marked (1) and (2) in the code listing above).

To understand why this code is necessary, let's say we have a document, which is a list of items.

And some of those items might have been deleted. I've added an isDeleted flag to mark which ones. (Unfortunately we can't just remove them from the array, because other inserts might depend on them. Drat! But that's a problem for another day.)

Imagine the document has 150 000 array items in it, representing 100 000 characters which haven't been deleted. If the user types an 'a' in the middle of the document (at document position 50 000), what index does that correspond to in our array? To find out, we need to scan through the document (skipping deleted items) to figure out the right array location.

So if the user inserts at position 50 000, we'll probably have to linearly scan past 75 000 items or something to find the insert position. Yikes!
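A sketch of that scan, assuming items with an isDeleted flag as above (the names here are illustrative, not the reference-crdts internals):

```typescript
type Item = { content: string; isDeleted: boolean }

// Map a document position (counting only live items) to an array index.
// This is a linear scan over every item, deleted or not - the bottleneck.
function findItemIndexAtPos(items: Item[], pos: number): number {
  for (let i = 0; i < items.length; i++) {
    if (pos === 0) return i
    if (!items[i].isDeleted) pos--
  }
  if (pos === 0) return items.length
  throw Error('Document is not long enough')
}
```

Every call walks the array from the start, so the cost grows with the total number of items ever inserted.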

And then when we actually insert, the code does this, which is double yikes:
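The original listing isn't reproduced here, but the insert boils down to a single array splice. A sketch:

```typescript
// Insert newItem at destIdx. splice shuffles every subsequent item
// one slot forward, which costs O(n) for an array of n items.
function insertAt<T>(content: T[], destIdx: number, newItem: T): void {
  content.splice(destIdx, 0, newItem)
}
```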

If the array currently has 150 000 items, javascript will need to move every single item after the new item one space forward in the array. This part happens in native code, but it's still probably slow when we're moving so many items. (Aside: V8 is actually suspiciously fast at this part, so maybe V8 isn't using a contiguous array internally to implement Arrays? Who knows!)

But in general, inserting an item into a document with n items will take about n steps. Wait, no - it's worse than that, because deleted items stick around: inserting into a document where there have ever been n items will take about n steps. This algorithm is reasonably fast, but it gets slower with every keystroke. Inserting n characters will take O(n^2) time.

You can see this if we zoom in on the diagram above. There's a lot going on here because Martin's editing position bounced around the document. But there's a strong linear trend up and to the right, which is what we would expect when inserts take O(n) time:

And why this shape in particular? And why does performance get better near the end? If we simply graph where each edit happened throughout the editing trace, with the same bucketing and smoothing, the result is a very familiar curve:

It looks like the time spent applying changes is dominated by the time it takes to scan through the document's array.

Changing the data structure

Can we fix this? Yes we can! And by 'we', I mean Kevin fixed these problems in Yjs. How did he manage that?

So remember, there are two problems to fix:

  1. How do we find a specific insert position?
  2. How do we efficiently insert content at that location?

Kevin solved the first problem by thinking about how humans actually edit text documents. Usually while we're typing, we don't actually bounce around a document very much. Rather than scanning the document each time an edit happens, Yjs caches the last (index, position) pair where the user made an edit. The next edit will probably be pretty close to the previous edit, so Kevin just scans forwards or backwards from the last editing position. This sounds a little bit dodgy to me - I mean, that's a big assumption to make! What if edits happen randomly?! But people don't actually edit documents randomly, so it works great in practice.

(What if two users are editing different parts of a document at the same time? Yjs actually stores a whole set of cached locations, so there's almost always a cached cursor location near each user no matter where they're making changes in the document.)
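The cached-cursor idea can be sketched like this (a toy model; the names are mine, not Yjs's actual API):

```typescript
type Item = { isDeleted: boolean }
type Cursor = { pos: number; idx: number } // last (document pos, array idx) pair

// Find the array index for a target document position, scanning
// forwards or backwards from the cached cursor instead of from 0.
function findIndex(items: Item[], target: number, cursor: Cursor): number {
  let { pos, idx } = cursor
  while (pos < target && idx < items.length) {
    if (!items[idx].isDeleted) pos++
    idx++
  }
  while (pos > target && idx > 0) {
    idx--
    if (!items[idx].isDeleted) pos--
  }
  // Remember where we ended up for next time.
  cursor.pos = pos
  cursor.idx = idx
  return idx
}
```

When consecutive edits land near each other (the common case), each lookup only scans a handful of items.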

Once Yjs finds the target insert location, it needs to insert efficiently, without copying all the existing items. Yjs solves that by using a bidirectional linked list instead of an array. So long as we have an insert position, linked lists allow inserts in constant time.
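A minimal sketch of why that's constant time - once we hold a reference to the neighbouring node, a doubly linked list insert only touches a couple of pointers:

```typescript
type Node<T> = { value: T; prev: Node<T> | null; next: Node<T> | null }

// Splice a new node in right after `node`. No other items move - O(1).
function insertAfter<T>(node: Node<T>, value: T): Node<T> {
  const newNode: Node<T> = { value, prev: node, next: node.next }
  if (node.next) node.next.prev = newNode
  node.next = newNode
  return newNode
}
```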

Yjs does one more thing to improve performance. Humans usually type in runs of characters. So when we type 'hello' in a document, instead of storing:

Yjs just stores:

Finally those pesky paste events will be fast too!

This is the same information, just stored more compactly. Unfortunately we can't collapse the whole document into a single item or something like that using this trick. The algorithm can only collapse inserts when the IDs and parents line up sequentially - but that happens whenever a user types a run of characters without moving their cursor. And that happens a lot.
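The merging rule can be sketched like this (a toy model with my own illustrative names, not Yjs's internals). A span covers content inserted with consecutive IDs [agent, seq .. seq+len):

```typescript
type Span = { agent: string; seq: number; content: string }

// A freshly typed character can be appended to the previous span when
// its ID continues the span's ID run AND it was inserted immediately
// after the span's last character (i.e. the cursor didn't move).
function tryAppend(
  span: Span,
  agent: string, seq: number,
  parentAgent: string, parentSeq: number,
  ch: string,
): boolean {
  const lastSeq = span.seq + span.content.length - 1
  if (agent === span.agent && seq === lastSeq + 1
      && parentAgent === span.agent && parentSeq === lastSeq) {
    span.content += ch
    return true
  }
  return false
}
```

When the check fails (cursor moved, or a different user's ID), a new span is started instead.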

In this data set, using spans reduces the number of array entries by 14x. (180k entries down to 12k).

How fast is it now? This blows me away - Yjs is 30x faster than my reference-crdts implementation in this test. And it only uses about 10% as much RAM. It's 300x faster than automerge!

Test | Time taken | RAM usage
automerge (v1.0.0-preview2) | 291s | 880 MB
reference-crdts (automerge / Yjs) | 31s | 28 MB
Yjs (v13.5.5) | 0.97s | 3.3 MB
Plain string edits in JS | 0.61s | 0.1 MB

Honestly I'm shocked and a little suspicious of how little RAM Yjs uses in this test. I'm sure there's some wizardry in V8 making this possible. It's extremely impressive.

Kevin says he wrote and rewrote parts of Yjs 12 times in order to make this code run so fast. If there was a programmer version of the speedrunning community, they would adore Kevin. I can't even put Yjs on the same scale as the other algorithms because it's so fast:

If we isolate Yjs, you can see it has mostly flat performance. Unlike the other algorithms, it doesn't get slower over time, as the document grows:

But I have no idea what those spikes are near the end. They're pretty small in absolute terms, but it's still weird! Maybe they happen when the user moves their cursor around the document? Or when the user deletes chunks? I have no idea.

This is neat, but the real question is: can we go even faster? Honestly I doubt I can make pure javascript run this test any faster than Kevin managed here. But maybe... just maybe we can be...

Faster than Javascript

When I told Kevin that I thought I could make a CRDT implementation that's way faster than Yjs, he didn't believe me. He said Yjs was already so well optimized, going a lot faster probably wasn't possible. 'Maybe a little faster if you just port it to Rust. But not a lot faster! V8 is really fast these days!!'

But I knew something Kevin didn't know: I knew about memory fragmentation and cache coherency. Rust isn't just faster. It's also a lower level language, and that gives us the tools we need to control allocations and memory layout.

Kevin knows this now too, and he's working on Yrs to see if he can claim the performance crown back.

Imagine one of our document items in javascript:

This object is actually a mess like this in memory:

Bad news: Your computer hates this.

This is terrible because all the data is fragmented. It's all separated by pointers.

And yes, I know, V8 tries its hardest to prevent this sort of thing when it can. But it's not magic.

To arrange data like this, the computer has to allocate memory one by one for each item. This is slow. Then the garbage collector needs extra data to track all of those objects, which is also slow. Later we'll need to read that data. To read it, your computer will often need to go fetch it from main memory, which - you guessed it - is slow as well.

How slow are main memory reads? At human scale each L1 cache read takes 0.5 seconds. And a read from main memory takes close to 2 minutes! This is the difference between a single heartbeat, and the time it takes to brush your teeth.

Arranging memory like javascript does would be like writing a shopping list. But instead of 'Cheese, Milk, Bread', your list is actually a scavenger hunt: 'Under the couch', 'On top of the fridge', and so on. Under the couch is a little note mentioning you need toothpaste. Needless to say, this makes doing the grocery shopping a lot of work.

To go faster, we need to squish all the data together so the computer can fetch more information with each read of main memory. (We want a single read of my grocery list to tell us everything we need to know). Linked lists are rarely used in the real world for exactly this reason - memory fragmentation ruins performance. I also want to move away from linked lists because the user does sometimes hop around the document, which in Yjs has a linear performance cost. That's probably not a big deal in text editing, but I want this code to be fast in other use cases too. I don't want the program to ever need those slow scans.

We can't fix this in javascript. The problem with fancy data structures in javascript is that you end up needing a lot of exotic objects (like fixed size arrays). All those extra objects make fragmentation worse, so as a result of all your work, your programs often end up running slower anyway. This is the same limitation immutablejs has, and why its performance hasn't improved much in the decade since it was released. The V8 optimizer is very clever, but it's not magic and clever tricks only get us so far.

But we're not limited to javascript. Even when making webpages, we have WebAssembly these days. We can code this up in anything.

To see how fast we can really go, I've been quietly building a CRDT implementation in rust called Diamond types. Diamond is almost identical to Yjs, but it uses a range tree instead of a linked list internally to store all of the items.

Under the hood, my range tree is just a slightly modified b-tree. But usually when people talk about b-trees they mean a BTreeMap. That's not what I'm doing here. Instead of storing keys, each internal node of the b-tree stores the total number of characters (recursively) in that item's children. So we can look up any item in the document by character position, or insert or delete anywhere in the document in log(n) time.

This example shows the tree storing a document which currently has 1000 characters:

Is this a range tree? I'm not sure - the wikipedia article on range trees describes something a bit different from what I'm doing here.

This solves both of our linear scanning problems from earlier:

  • When we want to find the item at position 200, we can just traverse across and down the tree. In the example above, the item with position 350 must be in the middle leaf node here. Trees are very tidy - we can store Martin's editing trace in just 3 levels in our tree, which means in this benchmark we can find any item in about 3 reads from main memory. In practice, most of these reads will already be in your CPU's cache.
  • Updating the tree is fast too. We update a leaf, then update the character counts at its parent, and its parent, all the way up to the root. So again, after 3 or so steps we're done. Much better than shuffling everything in a javascript array.

We never merge edits from remote peers in this test, but I made that fast too anyway. When merging remote edits we also need to find items by their ID (eg ['seph', 100]). Diamond has a little index to search the b-tree by ID. That codepath doesn't get benchmarked here though. It's fast, but for now you'll have to take my word for it.

I'm not using Yjs's trick of caching the last edit location - at least not yet. It might help. I just haven't tried it yet.

Rust gives us total control over the memory layout, so we can pack everything in tightly. Unlike in the diagram, each leaf node in my b-tree stores a block of 32 entries, packed in a fixed size array in memory. Inserting with a structure like this results in a little bit of memcpy-ing, but a little bit of memcpy is fine. Memcpy is always faster than I think it will be - CPUs can copy several bytes per clock cycle. It's not the epic hunt of a main memory lookup.

And why 32 entries? I ran this benchmark with a bunch of different bucket sizes and 32 worked well. I have no idea why that worked out to be the best.

Speaking of fast, how fast does it go?

If we compile this code to webassembly and drive it from javascript like in the other tests, we can now process the whole editing trace in 193 milliseconds. That's 5x faster than Yjs. And remarkably 3x faster than our baseline test editing a native javascript string, despite doing all the work to support collaborative editing!

Javascript and WASM is now a bottleneck. If we skip javascript and run the benchmark directly in rust, we can process all 260k edits in this editing trace in just 56 milliseconds. That's over 5000x faster than where we started with automerge. It can process 4.6 million operations every second.

Test | Time taken | RAM usage
automerge (v1.0.0-preview2) | 291s | 880 MB
reference-crdts (automerge / Yjs) | 31s | 28 MB
Yjs (v13.5.5) | 0.97s | 3.3 MB
Plain string edits in JS | 0.61s | 0.1 MB
Diamond (wasm via nodejs) | 0.19s | ???
Diamond (native) | 0.056s | 1.1 MB

Performance is smooth as butter. A b-tree doesn't care where edits happen. This system is uniformly fast across the whole document. Rust doesn't need a garbage collector to track memory allocations, so there's no mysterious GC spikes. And because memory is so tightly packed, processing this entire data set (all 260 000 edits) only results in 1394 calls to malloc.

Oh, what a pity. It's so fast you can barely see it next to yjs (fleexxxx). Let's zoom in a bit there and bask in that flat line:

Well, a nearly flat line.

And remember, this chart shows the slow version. This chart is generated from javascript, calling into rust through WASM. If I run this benchmark natively it's another ~4x faster again.

Why is WASM 4x slower than native execution? Are javascript calls to the WASM VM really that slow? Does LLVM optimize native x86 code better? Or do WASM's memory bounds checks slow it down? I'm so curious!

Struct of arrays or Array of structs?

This implementation has another small, important change - and I'm not sure if I like it.

In rust I'm actually doing something like this:

Notice the document's text content doesn't live in the list of items anymore. Now it's in a separate data structure. I'm using a rust library for this called Ropey. Ropey implements another b-tree to efficiently manage just the document's text content.

This isn't universally a win. We have unfortunately arrived at the Land of Uncomfortable Engineering Tradeoffs:

  • Ropey can do text-specific byte packing. So with Ropey, we use less RAM.
  • When inserting we need to update 2 data structures instead of 1. This makes everything more than twice as slow, and it makes the wasm bundle twice as big (60kb -> 120kb).
  • For lots of use cases we'll end up storing the document content somewhere else anyway. For example, if you hook this CRDT up to VS Code, the editor will keep a copy of the document at all times anyway. So there's no need to store the document in my CRDT structures as well, at all. This implementation approach makes it easy to just turn that part of the code off.

So I'm still not sure whether I like this approach.

But regardless, my CRDT implementation is so fast at this point that most of the algorithm's time is spent updating the document contents in ropey. Ropey on its own takes 29ms to process this editing trace. What happens if I just ... turn ropey off? How fast can this puppy really go?

Test | Time taken | RAM usage | Data structure
automerge (v1.0.0-preview2) | 291s | 880 MB | Naive tree
reference-crdts (automerge / Yjs) | 31s | 28 MB | Array
Yjs (v13.5.5) | 0.97s | 3.3 MB | Linked list
Plain string edits in JS | 0.61s | 0.1 MB | (none)
Diamond (wasm via nodejs) | 0.20s | ??? | B-Tree
Diamond (native) | 0.056s | 1.1 MB | B-Tree
Ropey (rust) baseline | 0.029s | 0.2 MB | (none)
Diamond (native, no doc content) | 0.023s | 0.96 MB | B-Tree

Boom. This is kind of useless, but it's now 14000x faster than automerge. We're processing 260 000 operations in 23ms. That's 11 million operations per second. I could saturate my home internet connection with keystrokes and I'd still have CPU to spare.

We can calculate the average speed each algorithm processes edits:

But these numbers are misleading. Remember, automerge and ref-crdts aren't steady. They're fast at first, then slow down as the document grows. Even though automerge can process about 900 edits per second on average (which is fast enough that users won't notice), the slowest edit during this benchmark run stalled V8 for a full 1.8 seconds.

We can put everything in a single, pretty chart if I use a log scale. It's remarkable how tidy this looks:

Huh - look at the bottom two lines. The jitteriness of yjs and diamond mirror each other: in periods when yjs gets slower, diamond gets faster. I wonder what's going on there!

But log scales are junk food for your intuition. On a linear scale the data looks like this:

That, my friends, is how you make the computer do a lot less work.


That silly academic paper I read all those years ago says some CRDTs and OT algorithms are slow. And everyone believed the paper, because it was Published Science. But the paper was wrong. As I've shown, we can make CRDTs fast. We can make them crazy fast if we get creative with our implementation strategies. With the right approach, we can make CRDTs so fast that we can compete with the performance of native strings. The performance numbers in that paper weren't just wrong. They were 'a billionaire guessing a banana costs $1000' kind of wrong.

But you know what? I sort of appreciate that paper now. Their mistake is ok. It's human. I used to feel inadequate around academics - maybe I'll never be that smart! But this whole thing made me realise something obvious: Scientists aren't gods, sent from the heavens with the gift of Truth. No, they're beautiful, flawed people just like the rest of us mooks. Great at whatever we obsess over, but kind of middling everywhere else. I can optimize code pretty well, but I still get zucchini and cucumber mixed up. And, no matter the teasing I get from my friends, that's ok.

A decade ago Google Wave really needed a good quality list CRDT. I got super excited when the papers for CRDTs started to emerge. LOGOOT and WOOT seemed like a big deal! But that excitement died when I realised the algorithms were too slow and inefficient to be practically useful. And I made a big mistake - I assumed if the academics couldn't make them fast, nobody could.

But sometimes the best work comes out of a collaboration between people with different skills. I'm terrible at academic papers, I'm pretty good at making code run fast. And yet here, in my own field, I didn't even try to help. The researchers were doing their part to make P2P collaborative editing work. And I just thumbed my nose at them all and kept working on Operational Transform. If I helped out, maybe we would have had fast, workable CRDTs for text editing a decade ago. Oops! It turned out collaborative editing needed a collaboration between all of us. How ironic! Who could have guessed?!

Well, it took a decade, some hard work and some great ideas from a bunch of clever folks. The binary encoding system Martin invented for Automerge is brilliant. The system of avoiding UUIDs by using incrementing (agent id, sequence) tuples is genius. I have no idea who came up with that, but I love it. And of course, Kevin's list representation + insertion approach I describe here makes everything so much faster and simpler. I bet 100 smart people must have walked right past that idea over the last decade without any of them noticing it. I doubt I would have thought of it either. My contribution is using run-length encoded b-trees and clever indexing. And showing Kevin's fast list representation can be adapted to any CRDT algorithm. I don't think anyone noticed that before.

And now, after a decade of waiting, we finally figured out how to make fast, lightweight list CRDT implementations. Practical decentralized realtime collaborative editing? We're coming for you next.

Appendix A: I want to use a CRDT for my application. What should I do?

If you're building a document based collaborative application today, you should use Yjs. Yjs has solid performance, low memory usage and great support. If you want help implementing Yjs in your application, Kevin Jahns sometimes accepts money in exchange for help integrating Yjs into various applications. He uses this to fund working on Yjs (and adjacent work) full time. Yjs already runs fast and soon it should become even faster.

The automerge team is also fantastic. I've had some great conversations with them about these issues. They're making performance the #1 issue of 2021 and they're planning on using a lot of these tricks to make automerge fast. It might already be much faster by the time you're reading this.

Diamond is really fast, but there's a lot of work before I have feature parity with Yjs and Automerge. There is a lot more that goes into a good CRDT library than operation speed. CRDT libraries also need to support binary encoding, network protocols, non-list data structures, presence (cursor positions), editor bindings and so on. At the time of writing, diamond does almost none of this.

If you want database semantics instead of document semantics, as far as I know nobody has done this well on top of CRDTs yet. You can use ShareDB, which uses OT. I wrote ShareDB years ago, and it's well used, well maintained and battle tested.

Looking forward, I'm excited for Redwood - which supports P2P editing and has planned full CRDT support.

Appendix B: Lies, damned lies and benchmarks

Is this for real? Yes. But performance is complicated and I'm not telling the full picture here.

First, if you want to play with any of the benchmarks I ran yourself, you can. But everything is a bit of a mess.

The benchmark code for the JS plain string editing baseline, Yjs, automerge and reference-crdts tests is all in this github gist. It's a mess; but messy code is better than missing code.

You'll also need automerge-paper.json.gz from josephg/crdt-benchmarks in order to run most of these tests. The reference-crdts benchmark depends on crdts.ts from josephg/reference-crdts, at this version.

Diamond's benchmarks come from josephg/diamond-types, at this version. Benchmark by running RUSTFLAGS='-C target-cpu=native' cargo criterion yjs. The inline rope structure updates can be enabled or disabled by editing the constant at the top of src/list/doc.rs. You can look at memory statistics by running cargo run --release --features memusage --example stats.

Diamond is compiled to wasm using this wrapper, hardcoded to point to a local copy of diamond-types from git. The wasm bundle is optimized with wasm-opt.

The charts were made on ObservableHQ.

Are Automerge and Yjs doing the same thing?

Throughout this post I've been comparing the performance of implementations of RGA (automerge) and YATA (Yjs + my rust implementation) interchangeably.

Doing this rests on the assumption that the concurrent merging behaviour of YATA and RGA is basically the same, and that you can swap between CRDT behaviours without changing your implementation, or your implementation's performance. As far as I know, nobody has looked at this idea before.

I feel confident in this claim because I demonstrated it in my reference CRDT implementation, which has identical performance (and an almost identical codepath) when using Yjs or automerge's behaviour. There might be some performance differences with conflict-heavy editing traces - but that's extremely rare in practice.

I'm also confident you could modify Yjs to implement RGA's behaviour if you wanted to, without changing Yjs's performance. You would just need to:

  • Change Yjs's integrate method (or make an alternative) which used slightly different logic for concurrent edits
  • Store seq instead of originRight in each Item
  • Store maxSeq in the document, and keep it up to date and
  • Change Yjs's binary encoding format.

I talked to Kevin about this, and he doesn't see any point in adding RGA support into his library. It's not something anybody actually asks for. And RGA can have weird interleaving when prepending items.

For diamond, I make my code accept a type parameter for switching between Yjs and automerge's behaviour. I'm not sure if I want to. Kevin is probably right - I don't think this is something people ask for.

Well, there is one way in which Yjs has a definite edge over automerge: Yjs doesn't record when each item in a document was deleted, only whether each item has been deleted or not. This has some weird implications:

  • Storing when each delete happened has a weirdly large impact on memory usage and on-disk storage size. Adding this data doubles diamond's memory usage from 1.12mb to 2.34mb, and makes the system about 5% slower.
  • Yjs doesn't store enough information to implement per-keystroke editing replays or other fancy stuff like that. (Maybe that's what people want? Is it weird to have every errant keystroke recorded?)
  • Yjs needs to encode information about which items have been deleted into the version field. In diamond, versions are tens of bytes. In yjs, versions are ~4kb. And they grow over time as the document grows. Kevin assures me that this information is basically always small in practice. He might be right but this still makes me weirdly nervous.

For now, the master branch of diamond includes temporal deletes. But all benchmarks in this blog post use a yjs-style branch of diamond-types, which matches how Yjs works instead. This makes for a fairer comparison with yjs, but diamond 1.0 might have a slightly different performance profile. (There's plenty of puns here about diamond not being polished yet, but I'm not sharp enough for those right now.)

These benchmarks measure the wrong thing

This post only measures the time taken to replay a local editing trace. And I'm measuring the resulting RAM usage. Arguably accepting incoming changes from the user only needs to happen fast enough. Fingers simply don't type very fast. Once a CRDT can handle any local user edit in under about 1ms, going faster probably doesn't matter much. (And automerge usually performs that well already, barring some unlucky GC pauses.)

The actually important metrics are:

  • How many bytes does a document take on disk or over the network
  • How much time does the document take to save and load
  • How much time it takes to update a document stored at rest (more below)

The editing trace I'm using here also only has a single user making edits. There could be pathological performance cases lurking in the shadows when users make concurrent edits.

I did it this way because I haven't implemented a binary format in my reference-crdts implementation or diamond yet. If I did, I'd probably copy Yjs & automerge's binary formats because they're so compact. So I expect the resulting binary size would be similar between all of these implementations, except for delete operations. Performance for loading and saving will probably approximately mirror the benchmarks I showed above. Maybe. Or maybe I'm wrong. I've been wrong before. It would be fun to find out.

There's one other performance measure I think nobody is taking seriously enough at the moment. And that is, how we update a document at rest (in a database). Most applications aren't collaborative text editors. Usually applications are actually interacting with databases full of tiny objects. Each of those objects is very rarely written to.

If you want to update a single object in a database using Yjs or automerge today you need to:

  1. Load the whole document into RAM
  2. Make your change
  3. Save the whole document back to disk again

This is going to be awfully slow. There are better approaches for this - but as far as I know, nobody is working on this at all. We could use your help!

Edit: Kevin says you can adapt Yjs's providers to implement this in a reasonable way. I'd love to see that in action.

There's another approach to making CRDTs fast, which I haven't mentioned here at all and that is pruning. By default, list CRDTs like these only ever grow over time (since we have to keep tombstones for all deleted items). A lot of the performance and memory cost of CRDTs comes from loading, storing and searching that growing data set. There are some approaches which solve this problem by finding ways to shed some of this data entirely. For example, Yjs's GC algorithm, or Antimatter. That said, git repositories only ever grow over time and nobody seems to mind too much. Maybe it doesn't matter so long as the underlying system is fast enough?

But pruning is orthogonal to everything I've listed above. Any good pruning system should also work with all of the algorithms I've talked about here.

Each step in this journey changes too many variables

Each step in this optimization journey involves changes to multiple variables and I'm not isolating those changes. For example, moving from automerge to my reference-crdts implementation changed:

  • The core data structure (tree to list)
  • Removed immutablejs
  • Removed automerge's frontend / backend protocol. And all those Uint8Arrays that pop up throughout automerge for whatever reason are gone too, obviously.
  • The javascript style is totally different. (FP javascript -> imperative)

We got 10x performance from all this. But I'm only guessing how that 10x speedup should be distributed amongst all those changes.

The jump from reference-crdts to Yjs, and from Yjs to diamond are similarly monolithic. How much of the speed difference between diamond and Yjs has nothing to do with memory layout, and everything to do with LLVM's optimizer?

The fact that automerge-rs isn't faster than automerge gives me some confidence that diamond's performance isn't just thanks to rust. But I honestly don't know.

So, yes. This is a reasonable criticism of my approach. If this problem bothers you, I'd love for someone to pull apart each of the performance differences between implementations I show here and tease apart a more detailed breakdown. I'd read the heck out of that. I love benchmarking stories. That's normal, right?

Appendix C: I still don't get it - why is automerge's javascript so slow?

Because it's not trying to be fast. Look at this code from automerge:

This is called on each insert, to figure out how the children of an item should be sorted. I don't know how hot it is, but there are so many things slow about this:

  • I can spot 7 allocations in this function. (Though the 2 closures should be hoisted). (Can you find them all?)
  • The items are already sorted reverse-lamportCompare before this method is called. Sorting an anti-sorted list is the slowest way to sort anything. Rather than sorting, then reverse()'ing, this code should just invert the arguments in lamportCompare (or negate the return value).
  • The goal is to insert a new item into an already sorted list. You can do that much faster with a for loop.
  • This code wraps childId into an immutablejs Map, just so the argument matches lamportCompare - which then unwraps it again. Stop - I'm dying!
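For illustration, inserting into an already-sorted list with a single scan looks something like this (a generic sketch, not automerge's code - the comparator stands in for a lamport comparison):

```typescript
// Insert `item` into `list`, which is already sorted by `cmp`.
// One linear scan to find the slot, one splice to place it - no
// sort, no reverse, no extra allocations.
function insertSorted<T>(list: T[], item: T, cmp: (a: T, b: T) => number): void {
  let i = 0
  while (i < list.length && cmp(list[i], item) < 0) i++
  list.splice(i, 0, item)
}
```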

But in practice this code is going to be replaced by WASM calls through to automerge-rs. Maybe it already has been replaced with automerge-rs by the time you're reading this! So it doesn't matter. Try not to think about it. Definitely don't submit any PRs to fix all the low hanging fruit. twitch.


This post is part of the Braid project and funded by the Invisible College. If this is the sort of work you want to contribute towards, get in touch. We're hiring.

Thank you to everyone who gave feedback before this post went live.

And special thanks to Martin Kleppmann and Kevin Jahns for their work on Automerge and Yjs. Diamond stands on the shoulders of giants.

Comments on Hacker News

2021 Seph Gentle https://github.com/josephg/


hinkley(10000) 3 days ago [-]

I'm getting mixed messages on CRDTs. Are we at the point now where they are general enough that the human observer is not constantly confronted with 'surprises' from the behavior of the system?

Some of the talks by Kleppmann go straight into the weeds and make it hard to tell if he's just nerding out about finer points or lamenting unsolved problems, or even paradoxes.

josephg(10000) 3 days ago [-]

As a community, we're in the process of crossing that river right now. A few years ago it was an accomplishment to get a text-based CRDT working at all. Now implementations are starting to compete on features and performance, and they're starting to see some use in real-world applications. But there are still some edge cases to iron out and understand in terms of memory size and pruning and things like that.

In a few years the rough edges will be ironed out and well understood, and there will be a good set of CRDT implementations you could use without worrying about this stuff. I think Yjs might already be there.

JW_00000(10000) 3 days ago [-]

By the way, as someone who has published academic papers, if you're ever bothered about a paper or have some comments, don't hesitate to mail the authors. (Their e-mail addresses are always on the paper; especially target the first author because they have normally done the work.) We are happy to hear when someone has read our work and I at least would've liked to have known if someone found a problem with my papers.

zladuric(10000) 3 days ago [-]

> especially target the first author because they have normally done the work

As someone living with a recently promoted (is that the correct term?) PhD in social sciences, this surprises me. Is this something specific to my country, or to the social sciences, or did my wife simply land in a case full of rotten apples?

lewisjoe(10000) 3 days ago [-]

I've been looking for a practical OT alternative for our online word processor (https://zoho.com/writer). We already use OT for syncing our realtime edits and are exploring CRDTs with stronger consistency for tackling offline edits (which are typically huge & defragmented, since the edits are not synced in realtime).

So the baseline is that OT has a better model for holding state in terms of performance/memory, since the edits can be compiled into plain string types. CRDTs in comparison force us to hold deleted states as well and demand richer information per unit (character/string/etc), which makes it harder on the CPU/RAM.

Here's the story as I understand:

1. Automerge tackles this by just moving to a better lower-level runtime: Rust.

2. Yjs handles this by using a similar technique i.e relying on V8's hidden classes to handle the performance optimizations and assuming real-world cases to narrow down and optimize datastructures.

But none of these, seem to be a fundamental breakthrough in the efficiency of the algorithm itself. They all at best look like a workaround and this keeps bothering me.

kevinjahns(10000) 3 days ago [-]

I know that it is hard to comprehend why modern CRDT implementations are fast. But the data confirms that they work great. OT seems to be much simpler, but there are real advantages in using CRDTs. The performance problems have been solved through an efficient representation of the CRDT model.

The gist of the read below [1] is that it is impossible for a human to create a document that Yjs can't handle (even in the worst-case scenario). But yes, it handles real-world scenarios particularly well.

The concept of 'hidden classes' is super old. It was first implemented in a fork of Smalltalk and then became a foundational concept of runtime engines for scripting languages. It is implemented in V8, Python, Ruby, SpiderMonkey, ..

Yjs does not assume a 'real-world scenario' and it is not optimized for any specific runtime engine. It runs fast in any browser. The benchmarks confirm this. [2]

Yjs is being used in practice by several companies (eg Nimbus Notes with >3 million users) for quite some time now. I'm not aware of any performance problems.

[1]: https://blog.kevinjahns.de/are-crdts-suitable-for-shared-edi... [2]: https://github.com/dmonad/crdt-benchmarks

pfraze(10000) 3 days ago [-]

You can remove tombstones in a cleanup pass if you constrain behavior a bit.

For instance, if there's just a single server, after 5 minutes of no active connections you could clean out tombstones. After that, if a client connects with some changes they had been holding onto but got DCed, you can reject the write and let the user's client merge by some other means (perhaps even manual).

josephg(10000) 3 days ago [-]

If you've got big offline edits (or you're merging multiple large sets of edits), even existing CRDTs will generally handle that more efficiently than OT will. OT algorithms are usually O(n * m) time complexity when merging n edits from one peer with m edits from another peer. A CRDT like diamond-types is O((n + m) * log(s)) where s is the current size of the document. In practice it's super fast.

As for holding deleted states and richer information per unit, it's not so bad in absolute terms. 1-2 MB of data in memory for a 17 page document is honestly fine. But there's also a few different techniques that exist to solve this in CRDTs:

1. Yjs supports 'garbage collection' APIs. Essentially you say 'anything deleted earlier than this point is irrelevant now' and the data structures will flatten all runs of items which were deleted earlier. So storage stays proportional to the size of the not-deleted content.

2. Sync9 has an algorithm called 'antimatter' which mike still hasn't written up poke poke mike!. Antimatter actively tracks the set of all peers which are on the network. When a version is known to have been witnessed by all peers, all extra information is safely discarded. You can also set it up to assume any peer which has been offline for however long is gone forever.

3. Personally I want a library to have an API method for taking all the old data and just saving it to disk somewhere. The idea would be to reintroduce the same devops simplicity of OT where you can just archive old history when you know it probably won't ever be referenced again. Keep the last week or two hot, and delete or archive history at will. If you combined this with a 'rename' operation, you could reduce the 'hot' dataset to basically nothing. This would also make the implementation much simpler - because we wouldn't need all these performance tricks to make a CRDT like diamond-types fast if the dataset stayed tiny anyway.
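Technique 1 can be sketched in a few lines. This is a toy model, not Yjs's actual API: once a version is known to be irrelevant, adjacent deleted runs collapse into a single tombstone whose only job is to keep offsets consistent.

```javascript
// Toy item list: each entry is a run of `len` characters, possibly deleted.
// After a GC point, adjacent deleted runs merge into one tombstone, so
// storage stays proportional to the live content plus one entry per gap.
function gcDeletedRuns(items) {
  const out = [];
  for (const item of items) {
    const last = out[out.length - 1];
    if (item.deleted && last !== undefined && last.deleted) {
      last.len += item.len; // absorb into the previous tombstone
    } else {
      out.push({ len: item.len, deleted: item.deleted });
    }
  }
  return out;
}
```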

vanderZwan(10000) 3 days ago [-]

On a meta-level, does anyone else think that the whole idea of writing a peer reviewed paper that is just a benchmark of different algorithms should be really rigorously reviewed before being accepted? Writing good benchmarks is hard, and so highly contextual that writing fair comparisons between algorithms (or data structures) is almost impossible unless you're an expert in all of the algorithms involved.

thysultan(10000) 3 days ago [-]

Only the holy experts can divulge the realm of gods that is benchmarking and reveal to us the results. How do we beseech them, o wise one?

peq(10000) 2 days ago [-]

It would be nice if pure benchmark papers were a thing. Most of the time system papers get accepted for some new idea. The evaluation section is often biased towards the new idea. Independent benchmarks could fix this.

Diggsey(10000) 3 days ago [-]

Yeah, I've also seen several academic papers on performance or 'optimization' of existing algorithms which just demonstrate a complete lack of knowledge about how those algorithms are implemented in practice.

For example, there was a paper explaining how you could optimize the GJK algorithm by reducing the number of distance checks required, and in turn the number of square-roots... Despite the fact that everyone (including the authors of the original GJK algorithm) knows that you don't actually need to do a square-root to compare distances...

lostdog(10000) 3 days ago [-]

Benchmarking papers are inaccurate when the original algorithms are not open sourced, and the grad student needs to rewrite the algorithm from scratch. They can easily create different implementation details, and wind up with an algorithm that's slower than the original.

I do think that the original algorithm authors should have the opportunity to correct the benchmarking code, or to release their original implementation as open source to be benchmarked.

In some sense, the benchmarking paper with a slower implementation is more 'correct,' since an engineer evaluating which algorithm to use is just as likely to implement the algorithm in a slower way than the original paper. The incentives are right too: the original paper author should be giving enough details to recreate their work, and the benchmarker is showing that really their published algorithm is slow.

knuthsat(10000) 3 days ago [-]

The problem is that academics are rarely expert programmers, or as knowledgeable about computer architectures as someone in industry. There are various tricks that are never taught in college, so academics have no idea some of this stuff even exists.

The best example is discrete optimization research (traveling salesman, vehicle routing and its variants, schedule rostering, etc.). The stuff you find in the papers there achieves state-of-the-art results very slowly (using integer linear programming or some rarely optimized heuristics), making you believe these instances of a general NP-hard problem can't be solved quickly.

When you start tinkering, you either find that data structures can be added that reduce the complexity significantly, or that there are regularities in instances that, when exploited, support massive speedups.

I would say that TSP research is an exception, but most of the stuff coming out that gets a lot of citations is way too slow and is never as brilliantly implemented as the Lin-Kernighan heuristic or other work from the age of insanely slow computers.

comicjk(10000) 3 days ago [-]

On the one hand, peer review takes long enough already. On the other... I saw an influential paper that published obviously-wrong timing data, essentially saying that time(A) + time(B) < time(B). It seems they were including the Python interpreter startup time (~0.1s) in a profile of a ~0.01s algorithm.

z3t4(10000) 3 days ago [-]

What I like about 'tests' in software development is that anyone can run them, just download the source code, then run ./test or right click and 'run tests'. It would be cool if computer science could offer the same experience, just download the source code and run it, compare if you got the same result, inspect and learn from the source code, etc. Instead of 'here's some pseudo-code we've never tried', and here's a mathematical formula that you need to be a mathematics professor to understand... Yes we know you are not a professional software developer, the code is going to be at a beginners level, but that is fine, I am not reading your paper to criticize your code for being 'impure', or not using the latest syntax and frameworks, I'm reading it to understand how to implement something, to solve a problem.

zwiek(10000) 2 days ago [-]

I also cannot understand how a paper whose main contribution is a set of benchmarks does not actually make the source code for those benchmarks publicly available. Unbelievable that such a paper can pass peer review. Very unscientific.

paulgb(10000) 3 days ago [-]

This is great! I'd like to quote a line here, because I think the answer is "someone on HN knows" and I'd like to hear the answer as well.

> V8 is actually suspiciously fast at this part, so maybe v8 isn't using an array internally to implement Arrays? Who knows!

bouk(10000) 3 days ago [-]

The V8 blog is a good starting point for learning how it works under the hood: https://v8.dev/blog/fast-properties

stephc_int13(10000) 3 days ago [-]

Trees are a powerful and practical data structure, but even if it does not appear clearly when doing O(n) style complexity analysis, they are usually slow.

Unfortunately, the difference between slow and fast can be several orders of magnitude, while the perception of the programmer doing a back of the envelope analysis seems to be a logarithmic scaling of the reality...

josephg(10000) 3 days ago [-]

Trees seemed to work pretty well for me here!

The problem with trees is usually that people don't pack enough data into each node in the tree. I implemented a skip list in C a few years ago[1]. For a lark I benchmarked its performance against the C++ SGI rope class which was shipped in the C++ header directory somewhere. My skip list was 20x faster - which was suspicious. I looked into what the SGI rope does and it turns out it was only putting one character into each leaf node in the tree it constructed. Benchmarking showed the optimal number was ~120 or so characters per leaf. Memcpy is much much faster in practice than main memory lookups.

In diamond-types (benchmarked here), the internal nodes in my B-tree store 16 pointers and leaf nodes store 32 entries. With run-length encoding, all 180,000 inserted characters in this data set end up in a tree with just 88 internal nodes and a depth of 3. It goes fast like this. But if you think an array based solution would work better, I'd love to see it! It would certainly need a lot less code.

[1] https://github.com/josephg/librope
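The run-length point generalizes: once each entry covers a whole run of characters, locating a document position means scanning a handful of runs rather than one node per character. A sketch of that lookup over a flat run list (the entry shape is illustrative; in a B-tree version this scan happens inside a leaf after descending the internal nodes):

```javascript
// Each entry covers `len` consecutive characters inserted as one run.
// Return which entry contains document position `pos`, plus the offset
// into that entry.
function findPosition(entries, pos) {
  for (let i = 0; i < entries.length; i++) {
    if (pos < entries[i].len) return { index: i, offset: pos };
    pos -= entries[i].len; // skip this whole run in one step
  }
  throw new Error('position past end of document');
}
```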

feikname(10000) 3 days ago [-]

Correct me if I'm mistaken:

The difference between diamond native and diamond WASM demonstrates how, even with WASM, native implementations beat browsers hard. Native implementations are still very much worth it performance-wise, especially for lower-powered devices and, perhaps, for reducing battery usage (as a consequence of less CPU use) on mobile devices.

Jweb_Guru(10000) 3 days ago [-]

The wasm implementation here was still running under a JavaScript test harness, so I suspect it's the JS-WASM boundary interactions that are causing the slowdown. WASM itself (if it doesn't need to interact with JavaScript) usually runs with a much smaller performance penalty.

__s(10000) 3 days ago [-]

Yes. Ultimately WASM is executing within a sandbox & involves being JIT compiled (read: not heavily optimized except for hot loops eventually). If native compilation is an option it makes sense to go that route

WASM competes with asm.js not asm (or, arguably, jvm etc)

lewisjoe(10000) 3 days ago [-]

People who are interested in the topic: I just found out they have open meetings about the parent project and seems like anybody could join - https://braid.org/

Great way to share progress. Kudos! :)

toomim(10000) 3 days ago [-]

Yep! Our next meeting is two Mondays from now, on August 2nd, at 4:00pm Pacific Time. All are welcome: https://braid.org/meeting-16

josephg(10000) 3 days ago [-]

Hello HN! Post author here. I'm happy to answer questions & fix typos once morning rolls around here in Australia

fulafel(10000) about 3 hours ago [-]

Terminology nit: cache coherence refers to CPU cache implementation behaviours at the hardware level in the presence of concurrent access from multiple cores. 'Data locality' or 'cache-friendly data layout' would work better here.

gfodor(10000) 3 days ago [-]

Thanks for ShareDB. It's dope. I extended it to support collaborative voxel editing (https://jel.app) and works great.

teodorlu(10000) 3 days ago [-]

Have you used CRDTs to solve any practical problems?

If so, how does the CRDT solution compare to a non-CRDT solution? If a non-CRDT solution is feasible at all?

username91(10000) 3 days ago [-]

It's a great article - really informative and enjoyable to read. Thanks for making it happen. :)

conaclos(10000) 3 days ago [-]

Hi josephg, I'm a CRDT researcher. It's great to see so much work around CRDTs nowadays!

Some of the optimizations you discuss have already been proposed by papers and implementations.

For instance, LogootSplit [1] proposes an implementation based on an AVL tree with extra metadata to get a range tree. LogootSplit also proposes a block-wise approach that stores strings instead of individual characters. Xray [2], an experimental editor built by GitHub and written in Rust, uses a copy-on-write B-tree. Teletype [3] uses a splay tree to speed up local insertions/deletions, based on the observation that a user performs several edits on the same region.

[1] https://members.loria.fr/CIgnat/files/pdf/AndreCollabCom13.p... [2] https://github.com/atom-archive/xray [3] https://github.com/atom/teletype

mirekrusin(10000) 3 days ago [-]

It seems that reproducibility in computer science, at least where no gigantic/proprietary datasets are needed, should not be a problem: simply publish a repository with the code. Are there any forces at work that make this so rare in practice?

xcombelle(10000) 2 days ago [-]

I believe that I understood the code tagged as follows:

> (But don't be alarmed if this looks confusing - we could probably fit everyone on the planet who understands this code today into a small meeting room.)

and the follow-up reading confirmed what I believed about this code.

Should I be worried about myself?

robmorris(10000) 3 days ago [-]

That's an impressive optimisation! Out of curiosity, what do you think are the most interesting or useful possible applications for an optimised CRDT?

When you're approaching an optimisation like this, do you mind me asking how you think about it and approach it?

politician(10000) 3 days ago [-]

Great post! I had no idea that list CRDTs could actually be fast because I read the papers showing how they were impractical. Thanks for investigating and writing this up — please accept my offer of a legitimate academic credential.

GlennS(10000) 3 days ago [-]

When optimizing `findItem`, did you consider storing the original index of each item on itself and using that as a starting point?

Obviously this might move later (maybe it can only increase?), but usually not by much, so I would guess it would make an efficient starting point / be immediately correct 99% of the time?

Looks like you already have 2 good solutions to this though (start from index of recent edits and range tree).

benjismith(10000) 3 days ago [-]

I've been following your work for years (and I'm actually neck-deep in a ShareDB project right now) so I just want to say thank you for all of your contributions! I especially enjoyed this post.

pookeh(10000) 3 days ago [-]

Wait, it doesn't look like you used the performance branch of automerge (which is now merged into master). It is significantly faster.


lewisjoe(10000) 3 days ago [-]

Thank you for writing this piece Joseph.

Just want to make sure if something's a possible typo or I'm getting it all wrong :)

Quote: 'But how do we figure out which character goes first? We could just sort using their agent IDs or something. But argh, if we do that the document could end up as abcX, even though Mike inserted X before the b. That would be really confusing.'

Since the conflict is only between the children of (seph, 0) the only possibilities are, either ending up with 'aXbc' or 'abXc' right? Or is there a legitimate possibility of ending up with 'abcX' ?

I'm assuming we'll apply a common sorting logic only to clashing siblings.

trishume(10000) 3 days ago [-]

Have you seen my Xi CRDT writeup from 2017 before? https://xi-editor.io/docs/crdt-details.html

It's a CRDT in Rust and it uses a lot of similar ideas. Raph and I had a plan for how to make it fast and memory efficient in very similar ways to your implementation. I think the piece I got working during my internship hits most of the memory efficiency goals like using a Rope and segment list representation. However we put off some of the speed optimizations you've done, like using a range tree instead of a Vec of ranges. I think it also uses a different style of algorithm without any parents.

We never finished the optimization work or polished it up, so it's awesome that there's now an optimized text CRDT in Rust that people can use!

ta988(10000) 3 days ago [-]

This was a great read, thank you. I wish there were more explanations of the 'black magic' part of Yjs. I'll have to dig into that.

thechao(10000) 3 days ago [-]

I love high-level systems languages like C/C++ and Rust... but everything you said about JavaScript being slow is the same thing assembly programmers experience when optimizing high-level systems languages.

In general, when I see C code and I'm asked to speed it up, I always use "100x" as my target baseline.

Majromax(10000) 3 days ago [-]

When you write:

> Yjs does one more thing to improve performance. Humans usually type in runs of characters. So when we type 'hello' in a document, instead of storing ['h','e','l','l,'o'], Yjs just stores: ['hello']. [...] This is the same information, just stored more compactly.

Isn't this not just the same information when faced with multiple editors? In the first implementation, if I pause to think after typing 'hel', another editor might be able to interject with 'd' to finish the word in another way.

In my view, these data structures are only 'the same information' if you provide for a reasonably-sized, fixed quantum of synchronization. The merging makes sense if e.g. you batch changes every one or two seconds. It makes less sense if you would otherwise stream changes to the coordinating agent as they happen, even with latency.
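For what it's worth, the two forms stay equivalent because a run is split the moment anyone needs to refer to its interior: a concurrent insert after 'hel' splits ['hello'] into ['hel', 'lo'] first. A minimal sketch of that split (field names are illustrative, not Yjs's internals):

```javascript
// Split a stored run at `offset` characters in. The right half keeps the
// same agent but a sequence number advanced by `offset`, so every
// character's identity is unchanged; only the packaging differs.
function splitRun(run, offset) {
  return [
    { agent: run.agent, seq: run.seq, text: run.text.slice(0, offset) },
    { agent: run.agent, seq: run.seq + offset, text: run.text.slice(offset) },
  ];
}
```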

kohlerm(10000) 3 days ago [-]

Very nice. When I read 'doubly linked list' I immediately thought 'what about a B-tree-like structure?' I guess Martin's idea to replace the IDs comes from the 'vector clock' idea for concurrent updates.

audidude(10000) 3 days ago [-]

If anyone is looking for the combination of a piece table and a B+tree (which appears to be what this article describes), I have one that I've been using for years across various GTK components.


gritzko(10000) 3 days ago [-]

About a decade ago, I implemented the Causal Tree CRDT (aka RGA, Timestamped Insertion Tree) in regular expressions, using a Unicode string as storage. Later we made a collaborative editor for Yandex based on that code. It used many of the tricks described in the text, even the optimization where you remember the last insertion point. So I am terribly glad it is all getting rediscovered.

The code is on GitHub [1]. There is also an article, which might be tough reading, so no link. Recently I greatly improved CTs by introducing the Chronofold data structure [2]. Regarding that benchmark article: I spent many years in academia, so the content-quality problem is familiar to me. In other words, I don't take such articles seriously. CRDTs are fast enough; that is not a concern.

[1]: https://github.com/gritzko/citrea-model

[2]: https://arxiv.org/abs/2002.09511

omgtehlion(10000) 3 days ago [-]

That is nice!

A couple of questions: Have you released a CT implementation on top of Chronofold? Do you have any plans to benchmark it against other algorithms?

_hl_(10000) 3 days ago [-]

I remember seeing that (regex CTs) and immediately thinking 'wtf, why would anyone want to do that'. Took me quite a while to understand that it's actually a pretty clever way to write fast state machines in browserland. So thank you for this work!

tekkk(10000) 3 days ago [-]

Excellent article! As someone who has to work with collaborative editing I must say the complexity of the whole area is at times daunting to say the least. So many edge-cases. So many mines to step on.

Now I think I am convinced that the OT vs CRDT performance comparison is kind of a moot point and the question is more about the user experience: which version produces nicer results when two very diverged documents are merged. Maybe one of these days I'll read an article about that too.

To get off on a tangent a little bit, I'd be interested to know how one could add in tracking of changes to Diamond or other CRDT? Can you add an arbitrary payload to the operation and then just materialize the areas different from the original snapshot? I know Yjs can do this by creating snapshots and then comparing them to another snapshot but it seemed a bit awkward and not suited for real-time editing.

josephg(10000) 3 days ago [-]

> Which version produces nicer results when two very diverged documents are merged.

From the user's perspective merging behaviour is basically identical in all of these systems.

Diamond supports full per character change tracking. So you know who authored what. I think Yjs does this too. I'm not sure what you mean about materialising areas differently? I'd like to have full branch support in diamond at some point too, so you can work in a branch, switch branches, merge branches, and all of that.

uyt(10000) 3 days ago [-]

I think this data structure is usually called a counted B-tree https://www.chiark.greenend.org.uk/~sgtatham/algorithms/cbtr... rather than a range tree

cryptonector(10000) 1 day ago [-]

Xi has/had a rope library that allowed one to apply many monoids at each internal node. So one could search for a position in the document as TFA is doing, but also count bytes, Unicode codepoints, Unicode characters/glyphs/widths, etc. with just one tree.

What's common to xi's approach and TFA's is monoids. Monoids are at the heart of CRDT.

Historical Discussions: 1 out of every 153 American workers is an Amazon employee (July 30, 2021: 634 points)

(637) 1 out of every 153 American workers is an Amazon employee

637 points 4 days ago by pseudolus in 10000th position

www.businessinsider.com | Estimated reading time – 3 minutes | comments | anchor

  • Amazon employs 950,000 workers in the US, the company said in its latest earnings report.
  • The US has a population of 261 million and an employed non-farm workforce of 145 million, per the BLS.
  • More people work for Amazon than are employed in the entire residential construction industry.


Amazon has made more than $221 billion in sales in 2021 so far, showing just how massive the company has become since Jeff Bezos founded it in 1994.

Today the ecommerce giant employs 1.3 million people around the world, with 950,000 of those in the US, the company said in its latest earnings release.

According to the most recent US employment report, there are 145.8 million nonfarm payroll workers out of a total population of 332 million.

That means one out of every 350 Americans works for Amazon, or one out of every 153 employed workers in the US.
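The ratios are easy to recompute from the article's own figures (950,000 US Amazon employees, 332 million population, 145.8 million nonfarm payroll workers):

```javascript
// Recomputing the article's "1 in N" figures from its stated inputs.
const amazonUS = 950_000;
const population = 332_000_000;
const nonfarmWorkers = 145_800_000;

const oneInAmericans = population / amazonUS;   // about 349.5, i.e. "1 in 350"
const oneInWorkers = nonfarmWorkers / amazonUS; // about 153.5, i.e. "1 in 153"
```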

More people work for Amazon than are employed in the entire US residential construction industry, which is responsible for 873,000 jobs.

Even with its massive scale, Amazon is still a distant second to the country's largest private employer, Walmart, which employs nearly 1.6 million people in the US, or one out of every 91 workers.

While it's possible that more people work at a McDonald's than either Amazon or Walmart — the fast-food brand estimates more than 2 million globally — the company primarily operates on a franchise model, so it directly employs fewer than 50,000 people in the US.

Along with Amazon's size, its decision to implement a $15 minimum wage across the company has had a measurable effect in the communities where it does business. It has also forced other large employers to follow suit.

In May, Amazon announced plans to hire 75,000 delivery and logistics workers at a $17 starting wage and a possible $1,000 bonus.

But last month, a New York Times report found that Amazon had a turnover rate of about 150% every year among hourly employees, leading some executives to worry about running out of hirable employees in the US.

In other words, with so many current and former Amazonians in the US, there's a good chance that you know someone who's worked there.

All Comments: [-] | anchor

justinzollars(10000) 4 days ago [-]

I have to say - Amazon is almost the only thing that works during COVID. Everything else is pegged.

I've been waiting 6 months to buy a couch, every time I check on an update, its delayed. Government services are impossible to work with, try getting a passport or license. If you have an emergency every office is closed or dysfunctional. Amazon on the other hand works fine. I'm so thankful for it.

kwhitefoot(10000) 3 days ago [-]

> Amazon is almost the only thing that works during COVID

Only in the US, thank goodness. Or I suppose I should say: everything is still working here in Norway, not quite at 100% but close.

Why is everything but Amazon underperforming in the US? It sounds like mismanagement to me.

ketzo(10000) 4 days ago [-]

Yeah, there's a reason Amazon is on or near the top of those 'which institutions do you trust most?' polls. Regardless of what you think about the company's practices/dominance/anything else, there is a lot to love about consistency.

annoyingnoob(10000) 4 days ago [-]

I cancelled all of my 'subscribe and save' items on Amazon last year because they stopped delivering them. I think Amazon delivered maybe half of what I ordered in 2020. I've all but given up on Amazon at this point.

barbazoo(10000) 4 days ago [-]

You're comparing apples to oranges.

Would you be able to buy a couch on Amazon? The #1 bestseller on Amazon currently says 'Currently unavailable. We don't know when or if this item will be back in stock.' so it doesn't work here the same way other retailers don't work.

So you're saying you can't get a passport from the government but you're able to buy certain things on Amazon therefore Amazon is better?

mcguire(10000) 4 days ago [-]

Burying the lede: 'Amazon is still a distant second to the country's largest private employer, Walmart, which employs nearly 1.6 million people in the US, or one out of every 91 workers.'

PaulDavisThe1st(10000) 4 days ago [-]

It's the derivative that matters.

buescher(10000) 4 days ago [-]

Neal Stephenson had the delivery part right.

hammock(10000) 4 days ago [-]

There are still more Walmart employees than Amazon employees.

samatman(10000) 4 days ago [-]

Movies, music, and microcode are holding up pretty well, for that matter.

airstrike(10000) 4 days ago [-]

He got so much right on that book it's downright eerie

itisit(10000) 4 days ago [-]

Fun fact: the name of the service AWS Sumerian comes from Snow Crash!

1001101(10000) 4 days ago [-]

In 2021, the deliverators are robots (Domino's Pizza Nuro N2 - https://selfdrivingdelivery.dominos.com/en)

nicbou(10000) 4 days ago [-]

You have a friend in the family

JacobDotVI(10000) 4 days ago [-]

Now do it for:

  • US DoD
  • Walmart
  • McDonald's
  • USPS

paxys(10000) 4 days ago [-]

> While it's possible that more people work at a McDonald's than either Amazon or Walmart — the fast-food brand estimates more than 2 million globally — the company primarily operates on a franchise model, so it directly employs less than 50,000 in the US.

hncurious(10000) 4 days ago [-]

Walmart, which has a presence in communities of all shapes and sizes, is the largest private employer in the nation with 1.5 million workers. Yet the number of Americans who rely on the corporate giant for their livelihoods is dwarfed by the number who rely on the federal government for their paychecks. The federal government employs nearly 9.1 million workers, comprising nearly 6 percent of total employment in the United States. The figure includes nearly 2.1 million federal employees, 4.1 million contract employees, 1.2 million grant employees, 1.3 million active duty military personnel, and more than 500,000 postal service employees.


dalbasal(10000) 4 days ago [-]

Also consider that quite a lot of people work within Amazon's greater sphere, with degrees of dependence ranging from significant to absolute. Eg all the people employed in parts of the ecommerce industry where amazon wields most of the power. AWSland. Etc.

Since everyone is using government workers as the comparison: government employees + government contractors, and those in the government contracting sphere.

so, yeah... they're big. At the very least, amazon jobs are now a standard of sorts. What they do and/or don't do as an employer is what's normal.

hammock(10000) 4 days ago [-]

There are still more Walmart employees than Amazon employees.

whoknowswhat11(10000) 4 days ago [-]

If you work in the book world, or in one of the many small businesses that sell through or buy from Amazon, they are a monster. Their delivery drivers seem to be everywhere these days (we have, I think, two deliveries per day in my area; one pass is as late as 10PM. Not your parents' USPS).

xfalcox(10000) 4 days ago [-]

It's bizarre. I'm not even American but Amazon is such a big part of my day to day life.

- At work we use AWS

- Amazon uses my company software

- My wife is a retailer and now sells on Amazon

- During work I use Twitch.TV as background noise (Amazon bought Twitch.TV)

- Last week, after work I was playing the new Amazon MMO game.

- After dinner I was watching The Office on Amazon Prime

BrissyCoder(10000) 4 days ago [-]

Welcome to Hell World.

haunter(10000) 4 days ago [-]

One of the main reasons I slowly started removing American media (movies, music, series, books, games, etc.) from my life a decade ago or so

ok2938(10000) 4 days ago [-]

I do not envy you.

mahathu(10000) 4 days ago [-]

Such a pretty house

And such a pretty garden

And no alarms and no surprises

simonw(10000) 4 days ago [-]

'The US has a population of 261 million and an employed non-farm workforce of 145 million, per the BLS'

Anyone know why the 'non-farm workforce' is the number reported here?

JustARandomGuy(10000) 4 days ago [-]

It's nothing malicious; it's a common way to express employment figures. Farm payrolls swell and contract seasonally (lots of work at planting and harvest, much less in between), so 'non-farm workforce' is a way of smoothing out the numbers.

traceroute66(10000) 4 days ago [-]

Rant time.... ;-)

What's with these stupid 'X in Y' numeric expressions that the dumb media insist on continuing to heap on the world ?

Why not consistently use a standardised means of comparison.

Like, I don't know .... percent. Or 'X in 100' if you think your newspaper/blog/website readership are too dumb to know what the % symbol means. The clue's in the name FFS ... per... cent ... that's what its there for !

6gvONxR4sf7o(10000) 4 days ago [-]

X in <CONSTANT> is just the inverse of <CONSTANT> in X. Each is appropriate at different times. 0.0004 is the same as 1 in 2500. I feel like 1 in 2500 is more intuitive, and that's generally the case for rare events, because it is oriented around answering the question 'how rare is this?'
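The inversion the comment describes is plain arithmetic; a minimal sketch (using the ~950,000 Amazon employees and ~145M non-farm workers quoted elsewhere in this thread, not fresh data):

```python
# Convert between a fraction, a percentage, and the "1 in N" form
# discussed in this thread.

def as_one_in_n(fraction: float) -> int:
    """Express a fraction like 0.0004 as the N in '1 in N'."""
    return round(1 / fraction)

# 0.0004 is the same as 1 in 2500, per the comment above.
assert as_one_in_n(0.0004) == 2500

amazon_employees = 950_000        # figure cited in the article
us_nonfarm_workforce = 145_000_000  # BLS figure cited in the article

share = amazon_employees / us_nonfarm_workforce
print(f"{share:.2%} of workers, i.e. roughly 1 in {as_one_in_n(share)}")
# prints "0.66% of workers, i.e. roughly 1 in 153"
```

Both forms carry the same information; which reads better depends on how rare the event is.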

throwslackforce(10000) 4 days ago [-]

You can't have a fraction of a person. The probability of having .65 out of 100 people working for Amazon is zero, no matter which 100 people you select. 1 out of 153 can at least happen.

neolog(10000) 4 days ago [-]

'1 in Y' is simpler than 'X in 100'. 100 introduces an irrelevant big number.

seattle_spring(10000) 4 days ago [-]

Why is '1 in X' so much worse than 'X in 100'?

rrrrrrrrrrrryan(10000) 4 days ago [-]

I generally agree with your sentiment, but not in this case. They're doing it here because fractional humans don't exist in real life, and the measure isn't being used for comparison.

'1 in 153 people' is easier for the human mind to visualize than '0.65 out of every 100 people.'

kube-system(10000) 4 days ago [-]

Ratios and fractions are covered in (hopefully?) every grade school, usually right alongside percentages. They're an entirely valid representation of data.


screye(10000) 4 days ago [-]

The convenience of Amazon is amazing, in part because American urban design and malls suck.

In denser neighborhoods there is a certain charm to walking around dense streets, and sometimes randomly entering stores that catch your eye. Having malls be this dreary location that you must visit to get anything makes Amazon look so much more attractive.

Similarly, small roadside stores in high foot traffic areas can end up being sustainable due to a high number of customers per second. On the other hand, malls are built to be large and inefficient in a manner that almost feels like it's by design.

There are a huge number of products that benefit from use-before-you-buy. Brick & mortar is innately profitable in this scenario. Brick & mortar stores can also facilitate warranty more easily and will generally have fewer returns. Usually, they would have been able to stand against online shopping in those product categories. However, most US cities lack areas that would naturally see high foot-traffic. This makes it impossible to actually run a physical store. As for the other product categories, I am so glad online shopping became a thing.

hoppyhoppy2(10000) 3 days ago [-]

>There are a huge number of products that benefit from use-before-you-buy. Brick & mortar is innately profitable in this scenario.

Unless that scenario turns into 'use (in the store) before you buy (on Amazon),' which retailers have been complaining is often what happens.

kaydub(10000) 4 days ago [-]

I wonder how it looks if you consider all the companies running on AWS and the engineers that only use AWS. 1 out of 153 working for Amazon, but I guess what I'm wondering is how many depend on AWS.

manquer(10000) 4 days ago [-]

In that case you should also consider all vendors/suppliers who manufacture and ship for Amazon either exclusively or for >50% of their volume. That number would be a lot larger than engineers on AWS.

Amazon's scale and reach is frighteningly massive.

pm90(10000) 4 days ago [-]

AWS is the magic that allows Amazon to scale its operations. I wouldn't be surprised if the Retail business was getting discounted (possibly free) rates for AWS products.

If the US was serious about antitrust, breaking AWS from Retail would probably be the best thing it could do.

runnerup(10000) 4 days ago [-]

I wonder what the largest private employers as % of total labor force have been throughout history.

We currently have a labor force of ~160 million people and Walmart employs '1 out of every 72 American workers' or ~1.3%

phreeza(10000) 4 days ago [-]

Wasn't basically everyone an employee of one 'company' in communist countries?

rossmmurray(10000) 4 days ago [-]

The UK's National Health Service (NHS) is definitely up there.

There are 1.6m people working for NHS and about 41m people of working age in the UK. So that's about 1 in 26 (or 3.9%).

Sources: https://www.kingsfund.org.uk/projects/nhs-in-a-nutshell/nhs-..., https://fred.stlouisfed.org/series/LFWA64TTGBQ647S

austincheney(10000) 4 days ago [-]

I don't know about private. But the largest employer in the world is the US Department of Defense employing an estimated 2.8 million people which is about triple what Amazon employs.

WalMart has about 2.2 million world wide of which 1.5 million are US employees.

Amazon, according to the article, has about 950,000 employees.



flohofwoe(10000) 4 days ago [-]

Probably the East India Company. It controlled 'half of the world's trade', had 50k employees, its own navy, and an army with a quarter million soldiers (although most of those were recruited from local populations). All the while, England and Wales' population was 5 million people at the start of the 18th century.

underseacables(10000) 4 days ago [-]

I'm torn badly with Amazon. After reading The Everything Store, and what has been written all over the place, Jeff Bezos is a massive turd. Employees are horribly treated, wages suppressed, and all sorts of terrible and abusive practices.

Then it comes time for me to buy something. I needed a new pair of size 14 sneakers. I drove to Adidas, Footlocker, Dick's, and a few other stores, but I just couldn't justify $100 sneakers that didn't look prison-issued.

Opened my Amazon app in my car after leaving the crowded mall, and found what appeared to be a decent pair of shoes. FakeSpot agreed with the reviews, and I bought them for $35. They will arrive tomorrow.

That kind of convenience is terribly addicting. I haven't figured out the solution, but I remember what it was like when Walmart came to town, put others out of business, mistreated employees, etc. We were unable to stop it then, how the heck are we going to stop it now?

So aside from "just stop buying from Amazon" what can we do ?

asdffdsa(10000) 3 days ago [-]

If you follow the link at the bottom of this comment it will lead you to $20 shoes from Target. They usually have the option to get it shipped to the store of your choice within several days. You can do a similar thing for other goods you need.

There are also discount/used clothing stores like TJ Maxx which have cheaper clothes.

Relatively speaking, it's not difficult to buy things not from Amazon.


sathackr(10000) 3 days ago [-]

Not sure why it's so hard for some people.

When Amazon Prime was real 2-day shipping and they'd send something via overnight air just to get it to you in time, it was nice. And would have been hard to give up.

Now that it just means free shipping, and you might get it tomorrow and you might get it in a week, it's much easier just to buy somewhere else. Walmart. Target. BH Photo. Adorama. Best Buy. Tons of options.

jdavis703(10000) 4 days ago [-]

Aren't size 14 shoes on the long side of the tail? Amazon will always be better at that than a brick and mortar retailer.

giardini(10000) 3 days ago [-]

As I understand it, Amazon is already internally organized in relatively independent groups, each focusing on a particular market. If so, one possibility would be to:

a) break Amazon into many independent companies based on these already-established groups and

b) bar forever (or for some time) these groups from merging with each other.

The initial conditions (terms of the break-up: e.g., how much cash each entity would get) would determine whether, when cast out onto the street, each might survive.

throw123123123(10000) 3 days ago [-]

A company hiring near 1% of the workforce is not suppressing wages, it is a massive demand for wages.

monksy(10000) 4 days ago [-]

This is the same reason why we still have terrible economy-class experiences on airplanes. All of the airlines are focused on trying to maximize revenue by forcing you to upgrade (if you want anything remotely non-terrible), and they all do it because they're all desperate to compete for the bottom.

(Well except for the MEA Carriers)

JoeQuery(10000) 4 days ago [-]

All you can do is do your best to not contribute to the problems you've identified.

No one wants to be the bad guy in their own story. That's why excuses exist.

Good luck. I understand where you're coming from.

shakezula(10000) 4 days ago [-]

> So aside from "just stop buying from Amazon" what can we do ?

The 'just stop buying amazon' arguments never made sense to me in the first place, because most people who use it are the ones budget-stretched in the first place. A lot of rural communities have no other viable options for some items as well.

It will take massive government action and that's it. There's no other way we can fix this problem. Wages are suppressed because it's _legal enough_ to suppress them. Labor fines basically become a cost of doing business.

After the union busting that went on during the Alabama Amazon unionization votes, after all of these labor complaints against Amazon coming under media attention and not a single thing being done about it, it's clear that there is truly nothing that will stop it except a general strike.

unchocked(10000) 4 days ago [-]

I'm pretty sure that commodity retail has always, for my living memory at least, been a pretty stressful, insecure, and low-wage position.

Amazon's scale brings welcome visibility to the problem, and offers employees the chance to unionize, but I'm puzzled by upper-class culture's seemingly new discovery that commodity retail is a shitty job.

prostoalex(10000) 4 days ago [-]

> Employees are horribly treated, wages suppressed

I never saw the definitive source on this, but around the time when the unionization vote was happening, people claiming experience in the logistics industry described Amazon as a place with low starting wages, but aggressive bonus structure for high performers. They also offered health benefits on day 1, which is unheard of in logistics.

Other warehouses in the area might have different compensation schemes, and people would generally gravitate to the one that suited their work style better.

The unionization vote seemed like it corroborated this thesis - it wasn't even 52/48, but something definitive, like 70/30 against the union.

Perhaps someone is aware of the data that incorporates take-home compensation with benefits and all, not the base rates advertised in job ads.

MattGaiser(10000) 4 days ago [-]

> So aside from "just stop buying from Amazon" what can we do?

Motivate other retailers to not be such lazy louts about technology?

Buttons840(10000) 4 days ago [-]

Half the reason I use Amazon is their web app doesn't suck. If I go to Walmart's website it will be ugly and slow, and I'm not confident it will work well on my phone. Why can't Walmart get this right? I'm fine buying from Walmart and having them store my CC info like Amazon does; it's only that poor web app experience holding me back.

Maybe someone can make an Amazon competitor that is just a decent shopping app. I don't care if your store only has shoes, if I see it and think 'oh, this is another one of those non-shitty-web-app stores' I'm likely to come back.

magpi3(10000) 4 days ago [-]

It is a difficult problem. There is a saying: ease is the disease. Seeking the easy path (which most of us do) generally leads to supporting institutions not for their moral worth, but for their ability to make our lives easier. And so companies focus on exactly that: making our lives easier.

Taxes might help. If you tax a company for their immoral behavior, the cost of their goods goes up and consumers might make different decisions. But in a democratic society the people who create taxes are elected by voters who want their lives to be easy.

Maybe the best thing would be to throw out democracy and be ruled by philosopher kings who can make these decisions for us. I only say that half-jokingly.

godot(10000) 4 days ago [-]

This is only tangentially related to your post at all, but as a budget-conscious shopper, I've come to realize that in America, there is usually a retail shop or two locally that consistently has the best deal for a category of things. Don't get me wrong, I still have to buy from Amazon for a lot of really miscellaneous things, but I think I manage to find good local deals (better than Amazon in most cases) for most category of items I buy, more so than most people. You just have to put in the work to do the research to find where those are (a lot of times that's about going to all these stores often enough that you get an understanding).

In your example of sneakers, Adidas/Footlocker/Dicks are not the places to go for me. Here in California there is a store called Big5 Sporting Goods that consistently has incredible prices (on sale or not). In the past decade I've pretty much never bought shoes anywhere else; most sneakers or hiking shoes I've bought are under $35 and they are the most comfortable shoes I've tried anywhere. There may be a similar store in whatever state you're in.

For clothes and some home goods items, similarly I wouldn't go to brand name stores at the malls; Ross and Marshall always have the best deals. For random home goods, Daiso (an Asian/Japanese brand store) which, fortunately for myself, has a lot of stores in California, has tons of super affordable options, as it's basically a Japanese dollar store. Then there are random things that Target has the best deals for; and other random things that Walmart has the best deals for (I know, buying from Walmart is not much better than buying on Amazon).

My main point is, you don't always have to resort to Amazon for budget, you just have to do enough research work to find out where else to go.

indigochill(10000) 4 days ago [-]

> That kind of convenience is terribly addicting.

Cigarettes are terrible addicting too and people quit them all the time.

As far as I can see, the low cost of the American lifestyle is largely subsidized by shifting the price onto others. Whether that's Amazon keeping prices low by exploiting labor in the west or Chinese manufacturers keeping prices low by exploiting labor in China or meat manufacturers keeping prices low by running factory farms, at the end of the day, the ethical choice is always going to have a higher price attached because you're eating the cost so others don't have to. The only solution is to put your money where your mouth is.

Others say legislation, but that's a transient solution at best. If the lobbyists don't get to twist the legislation in the first place, they'll just keep lobbying until they get their way. They have the money and organization to make it happen.

hogFeast(10000) 4 days ago [-]

I come at this from the other point of view: I used to work as an equity analyst (analysing companies), and I ended up gravitating towards retail.

The issue, as you imply, isn't only that Amazon is very good but the experience at many physical retailers is very poor. It is difficult to simplify this down to one thing imo.

Managers in physical retail are unusually bad. Retail used to be ludicrously profitable, so most companies have a dense layer of MBAs who have no real idea how to adapt or innovate.

I recently read The Secret Life of Groceries (not great tbh, but did cover some useful themes) and towards the end of the book (paraphrasing), it is framed as grocers all 'compete' to be the best version of the same thing. They strip all the difference out of their product, usually compete solely on price, and (ofc) someone eventually comes in and undercuts them. That is a failure of incentives and management.

This varies by industry, and is not limited to incentives/management. One reason for the lack of innovation in sports apparel is that there are basically two suppliers, and one of them is moving heavily into DTC. Every sportswear shop is just a Nike distributor, so there is no real differentiation there (the only innovation in the sector has been distributors moving up the chain like Sports Direct and Decathlon in Europe). So the reason why you can't find shoes cheaper is actually because of Nike, not distributors (and those distributors lack any capacity to innovate, management is mostly composed of MBAs who likely have worked at Nike or Adidas at some point).

But the solution is counter-intuitive: keep buying from Amazon. There is nothing structural or inevitable about Amazon's success (compare them with the large Chinese retailers, they actually look quite blundering and incompetent, they have made mistakes in distribution already that are going to choke them). Physical retail needs more innovation which can only come through firms dying, and entrepreneurs thinking about what consumers want (this happened with WMT, there was consolidation then competition as WMT got overrun by MBAs and they lost their edge...Tesco in the UK is a very extreme example of this too).

I wouldn't be pessimistic either: the distinction between online and offline retail really doesn't exist. Look at restaurants like CMG, they are taking most of their orders online...but that doesn't change the product. It is the same with retail: taking an order online doesn't change the fact that the retailer is holding some product somewhere, and is distributing that to you (this is the mistake that Nike is making, they are going into DTC thinking they can just cut everyone out and jack up prices...it is MBA, day-one strategy, and idiotically wrong). The real difference with offline retail is actually the cost of property, which is going to narrow over time. Ofc, this isn't universal...some verticals like hardware stores are ready-to-go already, others like clothing probably aren't (there isn't much value-add at consumer contact, and they pay v high rents)...but the innovation will come. I don't think physical retail is dead at all though. If anything the weakness of physical retail is that MBAs stripped all the life out of it, which left them open to competition from online. Online is just delivering the message that consumers have had enough.

EDIT: I will add that personally I think a lot of the stuff on Amazon is terrible. A lot of retailers in the 2000s were just innovating by going deeper into China, and closer to factories. Amazon just took that to its logical conclusion. It works for some products, not for everything. Branded stuff also tends to be fake.

jliptzin(10000) 4 days ago [-]

We can start by taxing enormous corporations more, not less, and direct that tax revenue to the benefit of local communities, not the military or some general federal slush fund. If we're not going to go after monopolies then least we could do is separate companies with >$50 billion in quarterly revenue and tax them differently than everyone else. Obviously they wield a ton of power to get to that point.

TedDoesntTalk(10000) 4 days ago [-]

Buy directly from the manufacturer's website whenever possible. It's not hard, especially for big-label names like those who make sneakers.

ethbr0(10000) 4 days ago [-]

Amazon is Amazon because they didn't let size dull their edge.

Unfortunately, that cuts both ways. There are a lot of ways to be exploitive, to your and your customers' benefits, when you're a $113 B revenue/quarter company, that simply aren't available when you're a startup.

Hell, Walmart pioneered the 'How'd you like to sell in our stores?' + 'You need to reduce prices, or it'd be a shame if we, your biggest customer, had to drop you' two step. And Amazon pioneered hyperscale logistics efficiencies. Both of which only work if you're giant.

If we want a return to competition of yore, I think it's only going to happen if we (a) prevent 'extra-large' companies from having in-house logistics & (b) prevent predatory contracts and pricing when a size disparity exists (e.g. Walmart/supplier).

And given both of these are pretty fundamental to the way many companies work, I'm not even sure they'd be feasible.

hosh(10000) 4 days ago [-]

That is the heart of "Aggregation Theory" from the stratechary blog. That these companies amass their market power by making their product so easy to use, they aggregate demand and are able to squeeze suppliers. They hold monopolistic power, but it is hard to argue that they "harm consumers". Unlike old school monopolies, consumers go to aggregators because they want to, not because they are forced to.

eplanit(10000) 4 days ago [-]

I have the same feelings. I'm really fine with Amazon -- but I'd be much more fine by seeing them reduced via antitrust actions, with the goal of seeing more competition.

First, I'd like to separate AWS from Amazon the Bazaar. It's just too much control over _both_ the Internet (including all other e-commerce) and retail merchandise commerce.

It's like they own the railroads, the ports, the trucks, and the stores -- we've seen the Robber Baron movie already.

wildrhythms(10000) 3 days ago [-]

Consumer-side protest has never worked. Organized labor is the only way for workers to effect change at the workplace.

vmception(10000) 4 days ago [-]

> So aside from "just stop buying from Amazon" what can we do

There are two issues here: One is that you want to boycott a company that treats its employees bad, and two is that you buying local makes money circulate in the local economy.

Make sure to decouple those.

I like to play a game of 'how is this retail employee going to lie to me today' when I walk in any brick and mortar establishment. Its entertainment.

hattmall(10000) 4 days ago [-]

Don't stop buying, keep buying, start returning. Buy things that are sold by Amazon and return them make accounts for free trials, buy things and return them, become unprofitable then figure out how to circumvent their bans!

jvanderbot(10000) 3 days ago [-]

That store should have let you try on stock at your leisure, let you pick colors, styles, etc, then facilitated delivery overnight to your home with an in store return option with 1 year discounted upgrade / trade in. No more out of stock BS.

It's a comedy of uncreative errors that leads us to zero in store competition with Amazon. Target isn't bad!

tacocataco(10000) 4 days ago [-]

I was under the impression that Amazon's web hosting drove its profitability.

TheSoftwareGuy(10000) 4 days ago [-]

> Employees are horribly treated, wages suppressed, and all sorts of terrible and abusive practices

We need to make those things illegal and we need to make sure those laws are enforced. This is the greatest downfall of capitalism, morals cannot be enforced by consumers because business operations are completely opaque to them. No company should be able to outcompete another by using such terrible, exploitative practices.

orangegreen(10000) 4 days ago [-]

You can use eBay if you don't want to use Amazon. Plus, you can even sell your stuff on eBay when you're done using it. I've never been an Amazon customer and don't plan on ever being one. I buy most things used too, saving lots of money along the way.

taurath(10000) 4 days ago [-]

> So aside from "just stop buying from Amazon" what can we do ?

Lobby your congressperson to break up Amazon and all the other big tech companies. Prevent them from bundling, vertically integrating, and using loss leading products to make all competitors not competitive.

Make their delivery network its own company who takes orders from other suppliers. Make their storefront and warehouses its own entity. Make their media organization stand on its own. People smarter than me can figure this out.

Make policy that punishes national and international companies and favors local businesses, and keeps the taxbase local rather than in Delaware.

Ultimately there's little that one can do with this hyper concentrated economy other than push for and join the political wave against concentration.

agumonkey(10000) 3 days ago [-]

How much of Amazon is causing brick-and-mortar stores to sell things higher than necessary? It's common that when a competitor pulls the rug out, the old guys have to increase prices to survive (a little longer).

hasmanean(10000) 3 days ago [-]

Both of them pay low wages but retail stores have to pay rent for their facilities. Amazon doesn't.

jareklupinski(10000) 4 days ago [-]

try ordering Allbirds directly from their website or in their stores, I'm a 14-15 too, they're in the $100 range but they're pretty stylish imo and last a while

chubot(10000) 4 days ago [-]

For diversity, I buy from target.com. They don't have the selection but sometimes that's what you want. I don't need to look through 20 brands of tissue. Their prices and shipping are generally on par with Amazon (I assume they are forgoing profit to get loyal customers)

So basically I would try going to target first, then Amazon. Or Newegg or B&H first, then Amazon. There are other retailers of course but those tend to have operational competence, which is hard because Amazon raised the bar.

masterof0(10000) 4 days ago [-]

> So aside from "just stop buying from Amazon" what can we do ?

Create a better service, sell cheaper and better-quality products, etc. Is it easy? Of course not. But what do you want to do? Why would I buy product X more expensively at another store? The convenience is not addictive, it's just good. People who don't have SDE salaries can afford the things they need at Amazon, because they are cheaper. Most people will find the best deals; unless you know a better place, I don't think you can do anything about it.

vadym909(10000) 3 days ago [-]

You wait

Product Makers will at some point realize they can sell direct and do it without AMZN. e.g. Nike, Apple

Competitors- Dick's, Best Buy will give product makers ultimatums- if we get undercut by AMZn, we're not going to sell your stuff- so you decide. Its nice to be able to buy and pickup curbside same day

Customers- I already find Amzn beating prices on eBay about 25% of the time. eBay shipping times are improving too

nixpulvis(10000) 4 days ago [-]

Fact is, a lot of times that extra 50% you pay will come back over the lifetime of the product, if you know how to shop for it. And shopping in person often makes it a lot easier to assess quality. Especially when Amazon orders turn out to be straight up forgeries.

mywacaday(10000) 4 days ago [-]

Lots of options, little will power, consumer and political. Unions, monopoly legislation, tie CEO salary to a multiple of employee salary and probably 100 other smarter ideas than those.

arbuge(10000) 3 days ago [-]

I buy all my sneakers from Ebay (new, not used) and they never cost more than the price you mentioned. There are definitely alternatives.

Zanneth(10000) 3 days ago [-]

$15/hr, working indoors in an air conditioned building, in relatively safe conditions, is an absolutely amazing deal for low skilled workers in most parts of the country. Not to say we couldn't do better, but it always helps to see from the perspective of those who are less fortunate to understand why they choose to work at places like Amazon.

allturtles(10000) 4 days ago [-]

I'm increasingly put out by Amazon and am trying to stop the habit of shopping there. I like books, and Amazon was founded on books, but the way they ship books now (typically loose in a soft envelope or mostly empty box) means that >50% of the books I've ordered recently have arrived damaged and had to be returned. They used to shrinkwrap books to a cardboard plate inside the box.

There are other problems: 1) Search is just terrible. Often I have to search in Google to find the product I'm looking for at Amazon. The 'other people bought/looked at these items' functionality which partially made up for bad search has been pushed out in favor of sponsored products (i.e. ads).

2) Shipping is only fast and cheap if you get Prime, which basically means paying for your shipping in advance and buying constantly at Amazon to amortize your initial investment.

3) Because Amazon no longer actually controls its own catalog, duplicate listings, misleading listings, merged listings that amalgamate multiple different editions of the same book, etc. abound. e.g. search for 'Norton Anthology of English Literature'. Instead of a neatly sorted list by volume/edition/condition, you get a whole mess of duplicate/overlapping listings, and also misleading garbage like this (shows 3 books but you only get 1 of them): https://www.amazon.com/Norton-Anthology-English-Literature-P....

I'm shifting towards just using Amazon as a 'wishlist' shopping cart and then finding the actual thing to buy elsewhere.

hombre_fatal(10000) 4 days ago [-]

> I bought then for $35. They will arrive tomorrow.

Shouldn't you wait to see what kind of cheapo $35 shoe arrives tomorrow, one you never got to try on, before you celebrate?

macintux(10000) 4 days ago [-]

FWIW, when I wanted a specific pair of shoes, I ordered them directly from the brand.

It's very convenient to buy from Amazon, but it's hardly a hardship to not do so.

ravenstine(10000) 4 days ago [-]

I'd buy more often from smaller businesses if it didn't take most of them 3 days to merely put the shipping label on the box, let alone actually ship it. Amazon is top dog because they know people want a shot at getting items on the same day or next day. Almost no other online business can match that besides maybe Walmart, but they're vastly inferior and their fast shipping rarely pans out.

munificent(10000) 4 days ago [-]

> So aside from "just stop buying from Amazon" what can we do ?

As far as I can see, there are only two forces potentially big enough to fight a big corporation today:

1. Another equally big corporation.

2. The federal government.

The fact that US anti-trust enforcement has been essentially non-existent means that #1 is almost gone these days. Citizens United allowed corporations to buy politicians, so it seems that #2 is dead too.

It sucks.

pyrale(10000) 4 days ago [-]

Amazon is a poster child for antitrust litigation. They're publicly known for using their dominant position to compete with their suppliers, and drive their competitors out of business with price dumping. There is little chance you'll find alternatives to Amazon if they're allowed to destroy competition.

Regulation can only go so far; it can help build and stabilize a healthy ecosystem, but it can't help with a fully consolidated business like this one, with a decade of experience in killing competition.

There's a good case to be made that they should get the standard oil treatment.

tvirosi(10000) 4 days ago [-]

Stop being a slave to your addictions and form some principles. Come on.

r00fus(10000) 4 days ago [-]

There are other megastores with options other than Amazon. Target, Walmart, etc.

I have found Amazon to be not the best with clothing/shoes unless you're buying the same shoes over and over - the pricing in that case is often not competitive either.

nunez(10000) 4 days ago [-]

The reason why I'm personally okay with buying stuff from the Everything Store is because [1] retail is a brutal business, [2] _everyone_ in retail (except perhaps the executive leadership) is treated like shit, and [3] Amazon, for all of their failings, is still absolutely obsessed with customer satisfaction.

Amazon is still one of few companies that has a completely seamless, no-BS return policy, for example.

The 'sensible' alternative is to shop direct from small businesses, which I do sometimes, but see [2] from above.

fridif(10000) 4 days ago [-]

>'So aside from just stop buying from Amazon, what can we do?'

Lower the barriers to entry for business. It is not easy to comply with regulations and reporting.

A scrappy startup could replace Amazon if it could focus on the product instead of burdensome and arbitrary overhead like rent + insurance + tax reporting + legal.

missedthecue(10000) 4 days ago [-]

Why do you accuse them of suppressing wages when they lead the way to $15/hr, and when they literally spend money lobbying congress to raise the federal minimum wage?


humaniania(10000) 4 days ago [-]

Advocate to people that you know about joining the boycott. Stop choosing the cheaper and more convenient option, because you're supporting abuse.

PostThisTooFast(10000) 3 days ago [-]

Yeah. Amazon is disgusting and essentially criminal, stealing designs from people and reproducing them as their own (see the Peak Design story).

But then you go to a brick-and-mortar store and they don't have shit. I'm talking middle-of-the-road sizes and products... nothing. I can't feel sorry for businesses that can't learn basic inventory control.

ransom1538(10000) 4 days ago [-]

'So aside from "just stop buying from Amazon" what can we do ?'

IMHO, the system is [china builds things] -> [middle man sells it on amazon] -> [consumer buys it on amazon]. Amazon needs these middle man people and has started cutting them out by producing their own lines to create even more profit. The only way Amazon will die is if the graph turns into: [china builds things] -> [consumer buys it]. I know, I know, i hear everyone saying: that is impossible. But I don't think it is and I think that is coming.

sharkmerry(10000) 4 days ago [-]

> The US has a population of 261 million and an employed non-farm workforce of 145 million, per the BLS.

>According to the most recent US employment report, there are 145.8 million nonfarm payroll workers out of a total population of 332 million.

ignoring the mismatching 'Populations'. (261 million seems to be 'Civilian noninstitutional population')

This [0] seems to say there are 152,283,000 employed in US.

Are there really ~6.4 million people working on farms in the US? I thought farm work was <1% of employed people.
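
A quick arithmetic check of the figures quoted in this comment may help (numbers taken from the comment itself; note that "nonfarm payroll" also excludes military workers, some non-profit employees, and other categories, so the gap is not made up of farm workers alone):

```python
# Figures as quoted in the comment above (BLS-sourced, per the links there).
total_employed = 152_283_000   # total employed persons
nonfarm_payroll = 145_800_000  # nonfarm payroll workers

gap = total_employed - nonfarm_payroll
print(f"gap: {gap / 1e6:.1f}M workers")      # ~6.5M
print(f"share: {gap / total_employed:.1%}")  # ~4.3% of total employed
```

So the ~6.4M "missing" workers are roughly 4% of the employed, well above 1%, which is why the gap cannot be farm work alone.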

burkaman(10000) 4 days ago [-]

Nonfarm also excludes military workers and non-profit employees, and a couple other categories.

runnerup(10000) 4 days ago [-]

Probably has to do with precise definitions of 'non farm', 'farmer', 'farming' and 'agricultural'.

BLS has this showing 2.3 million people working in agriculture: https://www.bls.gov/cps/cpsaat15.htm

And this showing there are 900,000 jobs for 'Agricultural worker' https://www.bls.gov/ooh/farming-fishing-and-forestry/mobile/...

Clearly the terms 'Employed persons in agriculture industries' and 'agricultural workers' have definitions that diverge much, much more than I would have thought as a layperson.

bluedino(10000) 4 days ago [-]

There are 328 million according to the Census


morelandjs(10000) 4 days ago [-]

I'm not sure why everyone continually romanticizes brick and mortar retail. It's terribly inefficient and wasteful of time, energy, space, etc.

Think of all the Walmart, Dicks, Big Lots parking lots and strip malls that could be converted into better space. Think of how much waste there is when you have to pack, unpack, stage and repack merchandise.

Amazon's distribution is a superior business model which is why it is popular. I'd also reckon that the carbon footprint per package is lower if you account for the driving that is required for more traditional shopping.

I'd rather see more competition using a similar business model than a return to concrete strip malls full of big box retailers.

annoyingnoob(10000) 4 days ago [-]

Amazon is putting in a new warehouse near where I live. They bought up farm land and are converting it into a crazy large building that spans the distance between two major roads. It's going to impact traffic on both of those roads; construction already has. We'll probably need a bigger bridge on one of those roads too. It will also have a huge parking lot for the people that will work there. This building is bigger than Walmart, Dicks, and Home Depot combined. I don't see how this is an improvement, just another big addition.

8note(10000) 3 days ago [-]

Brick and mortar retail is an advertising scam from the last century. It's built to waste people's time by making them walk by different products to get to the things they want.

grandvoye(10000) 4 days ago [-]

And McDonalds feeds 1% of the world every day.

punnerud(10000) 4 days ago [-]

And their main business is not food, but property: https://www.google.no/amp/s/qz.com/965779/mcdonalds-isnt-rea...

cs702(10000) 4 days ago [-]

That makes Amazon #3, after the federal government and Walmart:

     Entity        US Employees
  1. US Government         2.7M
  2. Walmart               1.6M
  3. Amazon                1.0M

If Amazon continues to grow at current rates, it will surpass Walmart's figure within 2 years.

Looking at these figures, it's evident that these three entities are far larger, wealthier, more connected, and likely more powerful than the vast majority of US cities, the vast majority of small countries in the world, and maybe even a few smaller US states.

tptacek(10000) 4 days ago [-]

Is this level of concentration a totally new phenomenon? In the 1970s GM employed almost 700k people, and the US population was like 30% lower. Is it just different companies every generation?

polote(10000) 4 days ago [-]

> likely more powerful than the vast majority of US cities, the vast majority of small countries

Indeed there are not a lot of countries/cities that are wealthy/powerful enough to send people in space

postmeta(10000) 4 days ago [-]

'Across the U.S., nearly 24 million people—a little over 15% of the workforce—are involved in military, public, and national service at the local, state and federal levels. Of this number, approximately 16 million are employed in state and local governments. The federal government numbers include active duty military personnel and U.S. Postal Service workers. The U.S. military has about 1.4 million active duty service members and another 800,000 reserve forces. There are approximately 800,000 postal workers. Beyond the military and the postal service, 2 million people—just over 1% of the U.S. workforce or 0.6% of the total population—are permanently employed by the federal government. More than 70% of the federal workforce serves in defense and security agencies like the Department of Defense, the intelligence community agencies, and NASA.

Contrary to popular belief in the bloated growth of the U.S. public sector, the size of the federal government proportionate to the total U.S. population has significantly decreased over the last 50 years. It has also shrunk in absolute numbers in terms of both the full-time and part-time workforce. If we compare the size of the U.S. public sector as a percentage of the total workforce with other advanced countries, the U.S. is often smaller than its European counterparts, including the United Kingdom, although larger than Japan, which has one of the smallest public sectors internationally. In stark contrast, 40% of the workforce in Russia is employed in the public sector. In Europe, the optimal size of government is equally hotly debated, while in Russia, the size of the government and the dependency that this generates within the workforce tends to mute critical commentary.' https://www.brookings.edu/policy2020/votervital/public-servi...

hansvm(10000) 4 days ago [-]

Not just a few, Amazon has more workers than any of the 10 smallest states and more revenue than the GDP of the smallest 30 (individually, not summed).

PaulDavisThe1st(10000) 4 days ago [-]

The law grants cities powers that corporations do not have. Corporations are only more powerful if wealth can be used to subvert the rule of law.

And that never happens. /s

anotheraccount9(10000) 3 days ago [-]

I'm terribly sorry to bring this up this way but: if Amazon is a bad employer, why are so many people working for this business? Is it because it's better than jobs with comparable wages (are others similar?)

dannyw(10000) 3 days ago [-]

It pays better than other unskilled jobs, offers actual benefits like healthcare, and it's predictable but corporatized abuse.

It's not like a small retail shop where the manager gives you fewer shifts after you refuse a sexual advance, or a restaurant where you're working 3 more hours than you're paid for.

speedgoose(10000) 3 days ago [-]

People can work for awful companies. One example is the 35 suicides in a short period at France Télécom : https://fr.m.wikipedia.org/wiki/Affaire_France_T%C3%A9l%C3%A...

narrator(10000) 4 days ago [-]

In the spirit of the FSF's 'Right to Read'[1] dystopia story, I present 'Fully Automated Amazon Communism' :

1. Everyone works at Amazon

2. Amazon has vertically integrated into every conceivable industry.

3. Everyone gets paid in Amazon gift cards.

4. Amazon automatically delivers to your home everything you need to live your life without you having to ask. It knows what to order based on an AI model of everything you have ever done or thought. Your level of consumption is automatically scaled to your gift card balance.

5. You rent everything that's not a consumable from Amazon.

6. If you quit your job at Amazon, you starve to death. You must even return your clothes because your license to them has been canceled. You could try and live in the woods and eat nuts and berries. Using someone else's Amazon prime account is punishable by death since that's the practical consequence of getting fired from your job at Amazon. The right of first sale has been abolished for all goods, so even if someone wanted to give you food, they don't have a product license to do that.

[1] https://en.wikipedia.org/wiki/The_Right_to_Read

rantwasp(10000) 4 days ago [-]

You're being downvoted and I don't understand why.

Here are a few more things for your list:

7. Amazon monitors everything you do online and offline (they do provide the backbone and all of ISP services). Corrective action is taken if needed

8. Amazon decides who get to live and who gets to die based on your predicted future value. Also, Amazon decides who gets to reproduce.

novok(10000) 4 days ago [-]

Kind of reminds me of the corpo start in cyberpunk 2077

KoftaBob(10000) 4 days ago [-]

So 0.65%? Not insignificant, but using "1 out of every 153" seems intentionally worded to sound more outsized and draw eyes.

I wish HN posters would stop encouraging this lowbrow form of journalism. For an educated community, people sure do love their cheap clickbait headlines here.
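
The "1 out of every 153" and 0.65% figures are two framings of the same ratio; a quick check (the exact employee count behind the headline is an assumption here, since the thread cites roughly 950k–1.0M US employees against the 145.8M nonfarm payroll workers quoted upthread):

```python
# Assumed inputs: ~950k Amazon US employees and the 145.8M nonfarm
# payroll figure quoted upthread; both are approximations.
amazon_us_employees = 950_000
nonfarm_workforce = 145_800_000

print(f"1 out of every {nonfarm_workforce / amazon_us_employees:.0f}")  # ~153
print(f"= {amazon_us_employees / nonfarm_workforce:.2%}")               # ~0.65%
```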

glasss(10000) 4 days ago [-]

I think 0.65% of the country sounds similar in scale to me, and I would also naturally be curious as to what that means in terms of 'how many people out of X'.

Historical Discussions: Activision Blizzard Hires Notorious Union-Busting Firm WilmerHale (July 29, 2021: 627 points)

(627) Activision Blizzard Hires Notorious Union-Busting Firm WilmerHale

627 points 5 days ago by dv_dt in 10000th position

www.promethean.news | Estimated reading time – 3 minutes | comments | anchor

Yesterday, July the 28th, many of Activision's 9,500 workers walked off the job to protest the culture of harassment and discrimination present at the company. Activision Blizzard is currently being sued by the state of California over alleged sexual harassment and 'frat boy' culture. The complaint from California asserts that "[f]emale employees receive lower starting pay and also earn less than male employees for substantially similar work."

The lawsuit shows wide-ranging discrimination towards women from the company. The complaint notes that the company is "only 20 percent women." Its top leadership is also "exclusively male and white," and that "very few women ever reach top roles in the company." The women who do reach higher roles earn less salary, incentive pay, and total compensation than their male peers, as evidenced in Activision's own records.

Activision has called in the experts to put down the claims of sexual harassment and discrimination and stop the protests by workers. Activision has hired WilmerHale. WilmerHale has been hired to "review" Activision's policies. WilmerHale's own site advertises its expertise as "union awareness and avoidance." They use attorneys and experts to develop "union avoidance strategies and union organizational campaigns." WilmerHale was used extensively by Amazon to spread anti-union misinformation and propaganda to "sow doubts about the unionization drive." WilmerHale was the firm that killed unionization efforts at an Amazon center in Bessemer, Alabama.

WilmerHale says its practices help "clients minimize liability so they can stay focused on their business objectives." That means your company can exploit its workers and weasel its way out of any accountability with WilmerHale's army of lawyers. WilmerHale is the Pinkertons of our time. They swapped out their rifles and shotguns for court affidavits to threaten workers into silence. Activision is facing the crop it sowed, for years it created a culture of sexual harassment and discrimination, and now that it's time for justice they call in the big guns of the labor law world to duck out of accountability. Any worker at any company deserves better than watching their boss abuse them then sneak out of any justice thanks to being outmaneuvered in the legal world. At WilmerHale, 1st-year counsel makes $350,000. The 1st year associate walks into the office in a $1,500 Burberry suit and makes $202,500 that year. Those Amazon workers they swindled out of higher wages make less than $30,000 a year.

Are we going to watch as that same insult to justice happens at Activision now?

All Comments: [-] | anchor

post_break(10000) 5 days ago [-]

The problem I see with them is the brain drain if the influential people leave. They have options at other companies, and if enough people leave Blizzard they may be stuck. But this is just me looking from the outside in. This also signals they are scared of them unionizing.

haha1234__(10000) 5 days ago [-]

Shouldn't influential people push for better working conditions with all their influence?

failuser(10000) 5 days ago [-]

Are there any influential people that might be on the receiving end of the frat boy culture?

babyblueblanket(10000) 5 days ago [-]

I've already heard that several lead devs have left and taken their teams with them to start new studios.

zamalek(10000) 5 days ago [-]

I'm really interested to see what happens here. Amazon busted the efforts of people with few options, people who could be threatened with bullshit/propaganda. I wonder if big law is sufficiently aware of the substantial impact that the recent years of ethical discussion have had on the gamer and game dev psyche. No matter the individual ethical stance of an employee, this sub-culture has had practice critiquing words and ideas.

It looks like the employees are almost universally on the equality end of the ethical spectrum (given how widespread the walkout was), which is even worse news for union busting.

tvirosi(10000) 5 days ago [-]

I know we don't want to say this, but there might be a possibility some of the talent secretly are indeed sexist (wouldn't surprise me in the gaming world) and thus actually prefer to stay at a place that preserves the male-dominated culture. That might be part of the strategy behind Activision's anti-walkout move here. Just a thought (I don't know).

aejnsn(10000) 5 days ago [-]

Well this company is DONE.

dylan604(10000) 5 days ago [-]

Mark this on your calendar and see how well it ages. Companies have survived much worse.

pm90(10000) 5 days ago [-]

Is there any way to break the stranglehold of large studios and enable smaller studios to build high quality games? I loved Blizzard back when they made their OG triad (Diablo, Warcraft, Starcraft) now it just seems indistinguishable from...EA. How can we enable more scrappy Blizzards?

t-writescode(10000) 5 days ago [-]

Buy indie games.

That's it.

Buy indie games, usually in Early Access if the game is something you like or would like.

Phasmophobia has made millions. Gunfire Reborn has made millions. Small game companies are more popular than ever. There's even a game about powerwashing out right now.

indigochill(10000) 5 days ago [-]

There are plenty of high quality games from small studios. Everything Supergiant Games makes is gold, with excellent art direction and soundtrack. Klei Entertainment also has a solid track record. If you're looking for the AAA cinematic aesthetic from a small team, let me introduce you to Senua's Sacrifice (back before the team that made it got bought out by Microsoft). And lest we forget, Stardew Valley was unbelievably successful and the passion project of one guy (who's even self-publishing now).

Additionally, between UE, Unity, and Godot, I don't feel like it's a lack of tools holding studios back. If you wanted even more games (but why - we're already drowning in a flood of them), one of the big steps might be to provide a stable funding solution for new developers so they didn't need to play financial Russian Roulette to create their art.

ThrowawayR2(10000) 5 days ago [-]

> 'Is there any way to break the stranglehold of large studios and enable smaller studios to build high quality games?'

Game engines are now cheap/free but what deep pockets buy are more and better artwork, level design, story writing, voice acting, sound design, background music, motion capture, etc. that are the hallmarks of AAA games. While small studios can (and do!) build great games, they can't replicate that because of the cost and I don't see a way to work around that. There's no way to automate artistry.

cratermoon(10000) 5 days ago [-]

Are you familiar with Frost Giant Studios? https://www.frostgiant.com/

babelfish(10000) 5 days ago [-]

Give your money to smaller studios! Check out indie gaming marketplaces like itch.io

failuser(10000) 5 days ago [-]

I think it's easier than ever. The engines are free and even individual developers can build incredible games. The harder part might be getting enough publicity.

rapsey(10000) 5 days ago [-]

Supergiant makes really awesome and highly acclaimed games.

blibble(10000) 5 days ago [-]

there's hundreds of fun games made by smaller studios

you can avoid EA, Blizzard and Ubisoft very easily these days

MontagFTB(10000) 5 days ago [-]

Runic Games (https://www.runicgames.com/) was also founded by a bunch of ex-Blizzard folk. The Torchlight series of games are great, and hold well to their Diablo-like roots.

pkaye(10000) 5 days ago [-]

Check out TheLazyPeon on YouTube who evaluates new MMORPG games all the time. Lots of great looking games coming out all the time but many of them never attract the critical mass of users to make them interesting.

cybwraith(10000) 5 days ago [-]

Check out Grim Dawn if you want that old school Diablo feel

nyanpasu64(10000) 5 days ago [-]

I've never heard of this news group before, all they offer for contact is a Gmail address with no names, and this domain doesn't even show up when I search Promethean News. I have my doubts about this website.

rareform(10000) 5 days ago [-]

I can't even access this site at my University due to it being a newly registered domain. From WHOIS: `Creation Date: 2021-06-29T18:05:15`

jdmoreira(10000) 5 days ago [-]

What world is this where companies hire union-busting mercenaries? Is this the Pinkertons in 2021? What kind of lawless place is the US where you can hire services to stop unionisation?

hemloc_io(10000) 5 days ago [-]

Ha! The Pinkertons are still around in 2021, and working quite hard.


leereeves(10000) 5 days ago [-]

> What world is this where companies hire union-busting mercenaries? Is this the Pinkertons in 2021?

Excellent example of how this headline can mislead people. They're lawyers 'hired to review Activision's policies', not armed thugs who bust union skulls.

screye(10000) 5 days ago [-]

I am usually against unions in tech because the free market tends to allow a proper balance of supply and demand. However, gaming might just be the perfect sub-domain of tech to benefit from unionization.

The supply is massively saturated, with dreamy-eyed programmers ready to give their lives to work in gaming. On the other hand, a very small group actually gets to make any of the creative decisions that every gamer has dreamt of making. The wages are below what the market pays, hours are exploitative and the companies are making massive profits. It is really difficult to compete with the AAA studios, because indie games take years to make and working without a wage for a decade isn't possible for many. On the other hand, medium-sized and successful indie studios get acquired before they can grow to a decent enough size to serve as any real competition. There are also a lot of shady practices with monetization, where the employees do not get a say in whether the company should or shouldn't indulge in said practices.

We've already seen soft-unionization of this type in a similar industry : media production. It works.

AAA game development in the US has been quite stale for the last decade. The big EA-Blizzard-Activision-Ubisoft studios have not come up with a single quality game in this time. Even Bethesda and Bioware seem to be on a decline. Rockstar, Valve and id Software seem to be doing fine, but nowhere close to the hit-after-hit that certain Japanese studios are producing. Naughty Dog and Supergiant would be the only two American studios creating 10/10s consistently, but both have an order of magnitude fewer employees than standard AAA studios.

The industry is in dire need of shake up. I hope this goes the employee's way this time around. It's about time.

nynx(10000) 5 days ago [-]

What exactly do you mean by proper balance? What you said later in the comment basically applies to all large companies:

> The wages are below what the market pays, hours are exploitative and the companies are making massive profits

Frondo(10000) 5 days ago [-]

Unions are a part of the free market, unless you'd like to remove people's freedom to associate just because they all happen to work at the same place.

okhuman(10000) 5 days ago [-]

> media production. It works.

Does it work? Wasn't it Medium (or someone) that, when faced with unionization, pivoted the company (with severance packages offered) to a Substack-like model?

Manuel_D(10000) 5 days ago [-]

The supply being massively saturated is also what makes a union impossible to form. The over-saturation of supply means there's plenty of people willing to work a non-union game dev job because the alternative is no job.

Also the EA-Blizzard-Activision-Ubisoft studios do consistently deliver quality games: COD, Battlefield, Assassin's Creed, Far Cry, etc. Formulaic, perhaps, but it's tough to claim that these aren't quality and popular games. Valve, on the other hand, have only produced Half Life: Alyx in the last decade (you could include Artifact, but it flopped).

elicash(10000) 5 days ago [-]

Collective bargaining is not at all incompatible with a 'free market.' It's just a smarter way to negotiate (more power collectively) and the end result is a contract between a company and a group of workers. It's not like, say, a minimum wage where government sets a price.

seany(10000) 5 days ago [-]

I've never understood why people are confused or upset by these kinds of things. It's almost literally not in the company's best interest to just 'let it happen', so why wouldn't they fight it?

happytoexplain(10000) 5 days ago [-]

Not trying to be smart by using your words: I legitimately have never understood why this comment is so common, considering how apparently obvious the fallacy of it is. You didn't imply 'the company must do X', you implied 'people shouldn't be upset that the company did X'. The former is a totally reasonable argument to be made. The latter makes absolutely no sense (without extenuating context). You're not really confused as to why people are upset about something a company does that negatively affects people, either in actuality or only in perception, simply because it is in the company's best interests, are you? It's pretty normal for a human not to care about the hazy future consequences of a company hurting itself by embracing some external morality over its own interests as a company, and, depending on the specifics of the case, it can even be pretty reasonable not to care.

ocdtrekkie(10000) 5 days ago [-]

It probably depends in some part on the replaceability of your workforce: Burning your workers isn't a big deal if they're cashiers. But if they're software engineers, burning the company's reputation with them may not be in the company's best interest: You might end up with a company that has no unionizers but also lost all its key talent.

pfranz(10000) 5 days ago [-]

I'm confused why you're confused. Making it more adversarial when it's obvious the company is in the wrong and has been negligent for so long is a great way to keep burning your talent and reputation, and doesn't sound like a great way to solve the long-term problem, either. For a public-facing company, a large part of this is public relations. Mediating or looking for good-faith resolutions is what I would expect as a consumer or potential employee.

In 1982 someone laced Tylenol with potassium cyanide and seven people died. Johnson & Johnson within 2 months released a triple-sealed package and had a nation-wide recall. Their market share dropped from 35% to 8%, but rebounded within a year. It's often talked about as a great PR response.

I doubt Activision Blizzard feels the need to be as responsive to this situation, but I don't see this as a great reaction.

unyttigfjelltol(10000) 5 days ago [-]

WilmerHale is not a notorious union-busting firm. The older set might know their work from A Civil Action, and their grandparents might remember the phrase that turned the red scare, 'At long last, have you left no sense of decency?'[1]

The marketing material is unfortunate, though, and the connection with Amazon labor work is notable.

[1] https://en.m.wikipedia.org/wiki/Wilmer_Cutler_Pickering_Hale...

tapoxi(10000) 5 days ago [-]

Also, what is this news source? Who are its writers?

I am by no means taking Activision's side here, but this website is sketchy.

juped(10000) 5 days ago [-]

yeah they're just biglaw, any biglaw firm does everything

cratermoon(10000) 5 days ago [-]

What does any of that have to do with whether or not the firm is notorious for union-busting? Representing a polluting corporation against citizens suing it for making them sick is 100% compatible with busting unions.

[Originally I had gotten this backwards and stated that WilmerHale represented the citizens]

belorn(10000) 5 days ago [-]

Looking at the Wikipedia article, I guess that the primary reason they were hired was their connection to government. When you are sued by the government, it is useful to have lawyers that have experience from working inside the government (and possibly have political connections). The connection with Amazon labor work could be part of the decision, but I wonder if there aren't more optimal choices if that was Blizzard's primary concern.

disposableuname(10000) 5 days ago [-]

They're a biglaw firm, which has its hands in many, many fields. If WilmerHale had anything resembling a specialty, it would be corporate/finance law.

cletus(10000) 5 days ago [-]

The Steve Jobs quote on why Xerox failed [1] strikes again. The finance people have taken over Acti-Blizzard and they've been coasting for 10+ years on their original franchises. All we have is annual CoD releases and Blizzard coasting on their old properties, where Blizzard hasn't had a significant original release in 10+ years.

This effort seems like it's part of the ruthless approach to controlling costs that slowly strangle a company from within.

I believe WoW is the #2 property (after CoD) at Acti-Blizzard and it's clearly changed from one of delivering a game to simply extracting as much money as possible from each customer much like how almost all mobile games do.

The state of California's complaint is bad. I mean really bad. The fact that 3-4 different people from AB all released different statements in the last week should tell you exactly how bad it is. That's classic panic mode. There should only be one.

This latest move tells you the company believes it will blow over and they're looking to do the minimal required to appease the detractors and get back to business as usual without having to pay people more or pay out a bunch of lawsuits.

Honestly, the heads of J Allen Brack and Bobby Kotick in particular should roll over this lawsuit.

[1]: https://www.youtube.com/watch?v=NlBjNmXvqIM

EDIT: commenters have noted (correctly) that I overlooked Overwatch. This was a significant release but it seems to also have waned in popularity, and Overwatch 2 is inexplicably going in some weird PVE direction.

Other than that you have a poorly received Diablo sequel, a series of lackluster-to-bad WoW expansions, a disastrous Warcraft 3 remaster and complete abandonment of the RTS genre that propelled them to success in the first place.

WC3 Reforged ('Refunded') was significant in that it was not only underwhelming and plagued with problems it made the original game worse with a forced download and loss of functionality.

The most significant change however was Blizzard not wanting a repeat of missing the MOBA boat with Dota 2 by adding a condition that all the IP for third-party maps belong to Blizzard, completely killing that ecosystem.

johnnyanmac(10000) 3 days ago [-]

>This latest move tells you the company believes it will blow over and they're looking to do the minimal required to appease the detractors and get back to business as usual without having to pay people more or pay out a bunch of lawsuits.

It doesn't, unless you know the exact people hired are the union-busting ones. It sounds like this is their HR consultants.

People really need to read past the title. This is a century-old firm that has been on all sides of the fence for a significant part of American history. They don't have an agenda to regress society (no more than the legal system does).

vkou(10000) 5 days ago [-]

If the adults in the room have 'taken over' Acti-Blizzard, it wouldn't have been ran like a frathouse.

The reality is that over the years, Activision has kept their hands off their golden goose, and that the company's rot originates with the original management team. Notice how most of the harassment, and the retaliation that followed, took place under the leadership of Mike Morhaime (and that he's been personally called out for the latter).

bhelkey(10000) 5 days ago [-]

> The finance people have taken over Acti-Blizzard and they've been coasting for 10+ years on their original franchises. All we have is annual CoD releases and Blizzard coasting on their old properties where Blizzard hasn't had a significant original release in 10+ years.

Hearthstone was released March 11, 2014. Overwatch was released May 24, 2016. Both of these were hugely popular titles.

__john(10000) 5 days ago [-]

> Blizzard hasn't had a significant original release in 10+ years

A quick search tells me that Overwatch has 6M players, which seems significant.

Panoramix(10000) 5 days ago [-]

Damn, I'm not an Apple fanboy but that soundclip by Jobs makes so much sense; I've seen that play out like a hundred times. Here's a related one by a former IBM CEO:


cratermoon(10000) 4 days ago [-]

> hasn't had a significant original release in 10+ years.

Let's talk about Heroes of the Storm. Not as a counter to your argument, but as a perfect illustration of your points.

First thing to know is that HotS started out as a Starcraft II arcade game. It uses the Starcraft II engine. Nothing new there.

Next, for those who don't know, HotS is a MOBA. Just like DotA (and DOTA 2), and League of Legends. DotA was a mod of Warcraft III: Reign of Chaos, created by the community. League of Legends is a standalone game inspired by DotA.

In HotS, all the player heroes are Warcraft, StarCraft and Diablo heroes. It's free-to-play. After a brief legal spat with Valve over rights to the name DotA and Blizzard trademarks, the two companies settled.

It's a decent game, but it's just repackaging Blizzard assets for a type of game that emerged out of Blizzard in the first place. To me, HotS is exactly the sort of thing a company coasting on its franchise would release.

Epitaph: AB completely abandoned the game three years after release, to the complete surprise of the small esports community that had grown up around it.

ashtonkem(10000) 5 days ago [-]

Union busting behavior is not that surprising, but talk about awful timing. Do the people at the top not consider the PR implications of this stuff?

failuser(10000) 5 days ago [-]

They might be banking on the "crush the SJWs" crowd. Who knows what their market research showed, but I think we will see soon enough.

minikites(10000) 5 days ago [-]

Not enough people are going to unsubscribe from WoW for it to make a difference, enough of the public is anti-union and many of the rest don't care.

dylan604(10000) 5 days ago [-]

If you're having a bad news cycle, why not just dump the rest of the trash with it? Let it all blend in together instead of clearing one cycle, then creating another. People can only focus on so much at one time. In a few months, people won't remember and will order their next release.

danaris(10000) 5 days ago [-]

They're probably more worried about what happens if they have to actually face a unionized workforce than what the public might say about them hiring a union-busting law firm.

forz877(10000) 5 days ago [-]

These messages are quite common. 'What terrible PR, what are they thinking.'

The truth is? It doesn't matter nearly as much as people think. Companies have caught on - a vocal minority will call them out, but it won't matter. People will still buy the next COD. At the end of the day, that's all anyone cares about top to bottom.

lnxg33k1(10000) 5 days ago [-]

Is it something they need to be worried about? I mean, Activision Blizzard has been shit all over the years, and kids still buy their games. On the other hand, hiring a firm is not proof of wrongdoing in the ongoing trial.

cbanek(10000) 5 days ago [-]

As a woman who worked for Blizzard in the past, I have to say, I'm very disappointed. I'll just leave it at that.

brailsafe(10000) 5 days ago [-]

What was your experience like? Surprised/not surprised?

filereaper(10000) 5 days ago [-]

Funny that WilmerHale has diversity and inclusion as a 'principle' while they actively undermine others and are paid to do so.

>'Our commitment to diversity and inclusion starts at the top and cascades throughout the firm. WilmerHale is one of very few AmLaw 100 firms with a woman co-managing partner, and part of an even smaller number of such firms that have had both a woman and a person of color as a co-managing partner. Our diversity and inclusion journey is one of continuous assessment, progress, and partnership with others committed to advancing these principles in the legal profession.'


TaylorAlexander(10000) 5 days ago [-]

Sadly the whole industry is like that. They make a lot of noise about 'doing better' but they still suppress any effort to gain worker power.

brendoelfrendo(10000) 5 days ago [-]

Interesting fact: the current director of World of Warcraft, Ion Hazzikostas, worked for WilmerHale before joining Blizzard.

gnicholas(10000) 5 days ago [-]

This makes it much less surprising that they picked WH to represent them. He would know the firm and their specialties, and they would have pitched him for this business.

What's really surprising is that a former BigLaw attorney is now the director of WoW!


brainfish(10000) 5 days ago [-]

I cut my teeth on Diablo, and played Diablo II for probably fifteen years after its release off and on as a way to stay connected with a friend who loved it similarly. More recently, I have consistently played Starcraft II since its release and enjoy a sense of mastery over that game unparalleled by my experience in any other.

I haven't purchased new Blizzard products since the Hong Kong censorship debacle[1] and quit playing Hearthstone at that time. However I had still played some of my other old favorites, reasoning that I was not providing them further financial support. The recent announcements about their terrible, sexist culture had challenged that notion for me, and I was not sure what to do.

This news is the straw that breaks my back. That Activision/Blizzard would double down on their despicable behavior and stance in this way is completely beyond the pale, and I for one will never again fire up those games that I loved so much.

Thanks for ruining that for me, Blizzard.

[1] https://en.wikipedia.org/wiki/Blitzchung_controversy

raxxorrax(10000) 4 days ago [-]

I think at the time when Starcraft 2 was released the company changed significantly.

I remember Bobby Kotick TD in SC 2. If he hits you, you lose money, if you hit him, you lose money too. It was banned after a short time.

I enjoyed SC2 very much, but I left their platform shortly after. Most of my friends stopped playing too.

beebmam(10000) 5 days ago [-]

I was sexually abused as a child, and having learned in the last week that Blizzard management has actively protected and covered up the sexual abuse and harassment that some of their high level employees have enacted on others, I have felt extremely sick to my stomach and it has been highly triggering for me.

Seeing that they're making literally no changes at all to management or executive leadership, I'm having a hard time describing the rage that I feel inside. These people who knew and covered up the harm deserve prison, not just being fired.

I've been a huge fan of Blizzard games since I was a child. When I see Blizzard pushing back by aiming to crush internal protest, these feelings I have about this corrupt anything good I ever felt for these games. These people in leadership positions and the HR department that covered this up are criminals and should be seen as such.

mcdevilkiller(10000) 5 days ago [-]

It was the developers that made those games, not 'the company'.

aeoleonn(10000) 5 days ago [-]

Check out Path of Exile 2 -- it's the spiritual successor to Diablo 2. And it's freemium.

fidesomnes(10000) 5 days ago [-]

I bet you applauded when Twitter censored people they didn't like.

meristohm(10000) 5 days ago [-]

I got hooked on Diablo, and the memory of anticipating Diablo II (for which I built a computer) is almost tangible. WoW was the next drug/escape, and Hearthstone my last Blizzard slot-machine until I became a parent. I never thought I'd move away from games, and now I don't miss them, turning instead to gardening and exploring with my kid. I still have a lot of emotion bound up in that game time, and perhaps they were useful in the absence of a counselor (in person or through books) who could help me cultivate a sense of purpose.

I still play a couple games (Hell Let Loose as a 3-person tank crew is great) but only as a way to spend time with distant friends.

ScoobleDoodle(10000) 5 days ago [-]

I also haven't purchased any new Blizzard products since Blitzchung. I uninstalled the Activision Blizzard game launcher last night, hopefully others are doing the same and adding to the dent in their KPI scores and financial bottom line.

x3iv130f(10000) 5 days ago [-]

A company is only just a shell. It is the creative people that work for a company that make the actual products.

If you like a game, keep bookmarks on the people who made it and follow them around the industry.

kelnos(10000) 5 days ago [-]

I don't really understand this attitude, or line of reasoning, or whatever you want to call it.

Sure, if a company does something that you find reprehensible, not giving them further money (or attention) is certainly a reasonable -- and honorable! -- thing to do.

But if you've already purchased a standalone[0], non-subscription product from that company, and that company doesn't gain any benefit from your further use of that product (or lose anything from you stopping use), I feel like you're only hurting yourself if you stop using it.

I will concede that if the act of playing one of these standalone games makes you think of the bad thing the company did and makes you angry/upset, I guess it makes sense to stop playing them. But unless the bad thing they did is something personally/viscerally important to you, it feels like that's a bit of an odd trigger.

[0] If the game is multiplayer, and connects to a company-run server, I guess you could make the argument that they benefit in some way from their active-users numbers being higher. I personally don't find that argument all that compelling, but everyone can of course decide where the cutoff of benefit is for them.

acituan(10000) 5 days ago [-]

> I for one will never again fire up those games that I loved so much

That is a very confusing argument to me.

Not only have you already given your money to them and received your end of the transaction (whether you make use of it or not), but is this the most robust way to conceptualize the identity of a corporation? No temporal limitations, no account for the actual people that make up the corporation at a given time?

Don't get me wrong, I'm not saying keep buying their games or give support to what they do but if the people who made your beloved Diablo aren't the same people responsible for today's shitshow, what's the point of your gesture?

The inverse concern applies too: VW, IBM, and Hugo Boss, among many others, had affiliations with Nazi Germany. Should they be condemned today, and if so, for how much longer? What determines the cutoff?

This is the classic ship of Theseus problem: if you replace every board of a ship one at a time, is it still the same ship? How do you define the identity function?

Sounds like this is less about the identity of the corporation you affiliated with and more about your own identity, defined through what you choose to affiliate with or not.

dharmaturtle(10000) 5 days ago [-]

I've slowly stopped playing video games after college... but I deleted my Blizzard account after Blizzard doubled down on Blitzchung. Banning the casters for six months... that still gets me riled up. It makes zero sense to ban the casters for the player's conduct.

jmcgough(10000) 5 days ago [-]

Same - I was a heavy Hearthstone player since beta, and was ranked within the top 500 for about six months towards the end when I was pushing to compete. But the Blitzchung incident left such a bad taste in my mouth that I quit the game and haven't returned to it. They didn't just penalize him for what he did, they BURIED him and effectively ended his career.

If he'd held up a sign for ending apartheid in another country he probably would have gotten some small penalty, it was clearly motivated by Blizzard's relationship with China. And it was so over the top and unprecedented that a ton of casters and pro players spoke out against it.

adkadskhj(10000) 5 days ago [-]


jorgesborges(10000) 5 days ago [-]

I'm leaning the same direction, although I can't bring myself to leave Starcraft. But who knows. For those unaware, there's a new company, Frost Giant Studios, founded by some of the best game developers from Blizzard, and they're devoted to creating the next big RTS [0]. One can speculate about their choice to depart from Blizzard, and their reasons are probably myriad, but it can't be unrelated to the horrible culture there. Here's an interview with some of them on The Pylon Show hosted by Artosis [1].



tarsinge(10000) 5 days ago [-]

The Blizzard that made Diablo and Diablo 2 is not the same as the Blizzard of today; the key people have moved on, so I don't see the issue with playing their older offline games.

alex_c(10000) 5 days ago [-]

'I haven't purchased new Blizzard products since the Hong Kong censorship debacle'

I don't think Blizzard has released any new products (other than game updates) since then anyway, so I expect most people can say the same :)

By the way, check out Grim Dawn with the Reign of Terror mod if you want a 'Blizzard-free' Diablo 2 experience.

Icathian(10000) 5 days ago [-]

My experience has been almost identical. What a crying shame that they've fallen this far.

dmead(10000) 5 days ago [-]

what the hell am i supposed to replace starcraft with?

runawaybottle(10000) 5 days ago [-]

They don't really care that much about the American market anymore. Something like 90% of League of Legends players are in China. The mobile gaming market is massive over there, hence Diablo Immortal.

China does not care about the West's uproar over most things. Blizzard does not care if you buy Diablo 4, they care if the East buys Diablo 4 and Diablo Immortal.

megablast(10000) 5 days ago [-]

Wow. What a stance you've taken. You will no longer play some old games. Not every hero wears a cape.

loourr(10000) 5 days ago [-]

I've spent a lot of time hiring software developers and I usually receive about 25 male applicants for every 1 female applicant.

Achieving equal representation across the entire industry is going to be literally impossible without a huge influx of women into the industry.

Further, because the big tech companies are pretending this is not the reality and strive to have equal representation in their workforce, it means even subpar female developers are able to get jobs at the likes of Google and Facebook, further depleting the remainder of the workforce and causing wage inflation, disincentivizing smaller or less well-funded companies from hiring them because they can find better and cheaper male counterparts.

Not defending 'bro culture', but I think the industry needs to come to terms with the realities of the situation. Legal action will do nothing to change this.

myohmy(10000) 5 days ago [-]

Oh look, it's the hiring manager at Activision! Good job confusing equality for equity. A decent manager could easily achieve gender equity with a '25:1' ratio. Unfortunately, it seems that might be beyond your capacity as a subpar hiring manager.

a1pulley(10000) 5 days ago [-]

> 'At WilmerHale 1st year counsel makes $350,000. The 24 year old 1st year summer associate still in law school walks into the office in a $1,500 Burberry suit and makes $202,500 that year. Those Amazon workers they swindled out of higher wages make less than $30,000 a year.'

The 200k figure is for a first-year associate who has graduated and passed the bar exam [1].

It's interesting to me that you can only hope to crack 300k at a top law firm after working for five years and going to law school for three. I.e., not until you're over 30.

The majority of SWEs at FAANG companies get there in two years. I.e., at age 24 [2]. Some might say the potential upside of becoming a partner outweighs higher early career SWE income, but I would retort that making director or VP are comparable accomplishments; not everyone makes partner.

Social status hasn't caught up to income, though; family and friends outside my tech circle still respect law and medicine more than engineering.

[1] https://www.wilmerhale.com/en/careers/lawyers/entry-level-or...

[2] levels.fyi

filmgirlcw(10000) 5 days ago [-]

The majority of SWEs at FAANG companies do not make that. Depending on company and location, they might get a sign-on stock grant that could theoretically get them approaching that, but that doesn't take into account vesting schedules and cliffs. It's assuming a best-case scenario and certainly not "the majority." And even then, a new hire low level SWE straight out of college isn't often going to get a $500k stock grant on-hire, no matter what levels says. And certainly no matter what Blind says.

boromi(10000) 5 days ago [-]

Those FAANG jobs are also in HCOL locations. SWE roles in smaller towns and in states other than the coasts don't command nearly as high a salary.

andreilys(10000) 5 days ago [-]

Not to mention the upside of owning equity in a startup that IPO's.

Plenty of examples of DoorDash/Airbnb/etc. SWEs with less than 5 YoE making $1M+ TC thanks to their equity.

jakear(10000) 5 days ago [-]

levels.fyi shows 300k+ salaries for senior engineers at FAANG's. Senior engineers are typically not 2 years out of college. Some exceptions I'm sure, but certainly not the majority.

cs702(10000) 5 days ago [-]

Clickbait title for a highly opinionated piece that would never see the light of day in a respectable publication like the Washington Post, WSJ, NYT, Economist, etc. WilmerHale is a well-known top-100 law firm with a long history -- it's definitely not a 'notorious union-busting firm.' I'm flagging this story because it doesn't belong on the front page of HN, IMHO.

Overton-Window(10000) 5 days ago [-]

> in a respectable publication like the Washington Post, WSJ, NYT, Economist, etc

Your assessment is a decade out of date.

gotostatement(10000) 5 days ago [-]

Their own website lists one of their areas of expertise as 'advising on union awareness and avoidance'

https://www.wilmerhale.com/en/solutions/labor-and-employment Under 'Executive and Workforce Training' tab

beebmam(10000) 5 days ago [-]

A quote from the article: 'WilmerHale's own site advertises its expertise as 'union awareness and avoidance.''

Therefore, yes, it is accurate to define them as a 'union-busting firm'. They fulfill other roles than just that, but they do act as union-busters, and they even describe themselves as such.

ggwicz(10000) 5 days ago [-]

Grow up. WilmerHale is massive, and yes, they've done—and do—many different things.

But their track record of labor-related work is almost entirely _against_ workers. You can like or dislike their positions all you want, but 'notorious' and 'union-busting' are both fair adjectives to describe them.

They infamously defended firms in Germany who'd profited off forced labor during the Holocaust. They are currently working with Amazon against the labor-organizing efforts there. And now they're doing that with Activision/Blizzard.

Have you got an actual argument, like the number of pro-labor positions they've argued in their long history?

djanogo(10000) 5 days ago [-]

'respectable publication like the Washington Post, WSJ, NYT, Economist, etc' I am not sure if you are being /s.

Historical Discussions: Crafting Interpreters is available in print (July 29, 2021: 608 points)
Crafting Interpreters: Closures (September 27, 2019: 269 points)
Crafting Interpreters: Superclasses (March 18, 2020: 253 points)
Garbage Collection (November 30, 2019: 230 points)
Crafting Interpreters: Classes and Instances (December 31, 2019: 224 points)
Crafting Interpreters Is Complete (April 06, 2020: 199 points)
Crafting Interpreters (July 10, 2018: 187 points)
Jumping Back and Forth (May 20, 2019: 145 points)
Implementing Methods and Initializers (February 20, 2020: 41 points)
Crafting Interpreters Is Complete (April 05, 2020: 9 points)
Optimization · Crafting Interpreters (April 05, 2020: 9 points)
Crafting Interpreters: Functions (April 03, 2019: 5 points)
Crafting Interpreters (October 27, 2020: 3 points)
A handbook for making programming languages (September 13, 2020: 2 points)
Crafting Interpreters (October 03, 2019: 2 points)
Crafting Interpreters: Global Variables (January 30, 2019: 2 points)
Crafting Interpreters: Global Variables (January 29, 2019: 2 points)
Crafting Interpreters: A Bytecode Virtual Machine (March 01, 2018: 1 points)
Show HN: Crafting Interpreters – A handbook for making programming languages (January 15, 2017: 423 points)
Show HN: How to write a recursive descent parser (March 20, 2017: 347 points)

(608) Crafting Interpreters is available in print

608 points 5 days ago by azhenley in 10000th position

craftinginterpreters.com | | comments | anchor

Crafting Interpreters contains everything you need to implement a full-featured, efficient scripting language. You'll learn both high-level concepts around parsing and semantics and gritty details like bytecode representation and garbage collection. Your brain will light up with new ideas, and your hands will get dirty and calloused. It's a blast.

Starting from main(), you build a language that features rich syntax, dynamic typing, garbage collection, lexical scope, first-class functions, closures, classes, and inheritance. All packed into a few thousand lines of clean, fast code that you thoroughly understand because you write each one yourself.
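The core mechanics that blurb names can be condensed into a toy sketch. The following is not the book's code (the book builds its tree-walk interpreter in Java and its bytecode VM in C); it is a minimal, assumed-names Python illustration of the one trick behind lexical scope and closures: environments form a chain of frames, and a function value captures the environment it was defined in.

```python
# Illustrative sketch only (assumed names, not the book's actual code).
# Environments are pairs ({name: value}, parent_env), or None when empty;
# a "closure" value carries the environment that was live at definition.

def evaluate(node, env):
    kind = node[0]
    if kind == "lit":                      # ("lit", value)
        return node[1]
    if kind == "var":                      # ("var", name) -> walk the chain
        scope = env
        while scope is not None:
            if node[1] in scope[0]:
                return scope[0][node[1]]
            scope = scope[1]
        raise NameError(node[1])
    if kind == "let":                      # ("let", name, expr, body)
        child = ({node[1]: evaluate(node[2], env)}, env)
        return evaluate(node[3], child)
    if kind == "fun":                      # ("fun", param, body)
        return ("closure", node[1], node[2], env)  # capture defining env
    if kind == "call":                     # ("call", fn_expr, arg_expr)
        _, param, body, captured = evaluate(node[1], env)
        return evaluate(body, ({param: evaluate(node[2], env)}, captured))
    if kind == "add":                      # ("add", left, right)
        return evaluate(node[1], env) + evaluate(node[2], env)
    raise ValueError(f"unknown node: {kind}")

# let add3 = fun x -> x + 3 in add3(39)
prog = ("let", "add3",
        ("fun", "x", ("add", ("var", "x"), ("lit", 3))),
        ("call", ("var", "add3"), ("lit", 39)))
print(evaluate(prog, None))  # 42
```

Because the closure stores `env` rather than re-resolving names at call time, a function called far from where it was defined still sees its original bindings, which is exactly what makes first-class functions and closures work in a tree-walk interpreter.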

The book is available in four delectable formats:

All Comments: [-] | anchor

knuthsat(10000) 5 days ago [-]

I just finished the first part of the book (building the interpreter, but I did it in typed Python).

The book sent me back to the college days where I remember doing similar things.

Can't wait to start working on the bytecode virtual machine.

It's really an excellent book.

xcubic(10000) 5 days ago [-]

How difficult was it to make this in a different language than the one used in the book?

vmchale(10000) 5 days ago [-]


A) Being implemented in Java


B) Not having anything satisfying on types

Between those two, I can't get excited about it. I know it's an uncommon opinion, but there's so much missing that it can't be a monograph/treatise.

chrisseaton(10000) 5 days ago [-]

What's wrong with Java as a compiler implementation language? One of the most powerful dynamic compilers in the world is written in Java. And its object-oriented design works well for compilers.

tlhunter(10000) 5 days ago [-]

The PDF sample I opened suggests it's implemented in C. The GitHub suggests it uses both C and Java: https://github.com/confucianzuoyuan/craftinginterpreters

munificent(10000) 5 days ago [-]

Author here! It feels amazing to have this done and live. I'm happy to talk about whatever you might want to know about it. If you're curious how the sausage was made, I wrote a blog post about taking the web book and bringing it to print and ebook here:


syntaxfree(10000) 5 days ago [-]

It's good work, congratulations.

krylon(10000) 5 days ago [-]

Thank you so much for all the work you put into this!

steveklabnik(10000) 5 days ago [-]

Congrats! Can't wait to get my hands on a copy.

acj(10000) 5 days ago [-]

Congrats, Bob, and thanks for all of your time and effort. I enjoyed the illustration videos that you included in the earlier blog post [1] (under 'Illustrating by Hand'). That post landed at a time when many of us were badly stressed, and I remember it being very soothing.

[1] https://journal.stuffwithstuff.com/2020/04/05/crafting-craft...

mtlynch(10000) 5 days ago [-]

Congrats on the print version, Bob! I appreciate you taking the time to share everything you've learned about the writing and publishing process.

I look forward to reading this!

tylerscott(10000) 5 days ago [-]

Just another congrats! I have been waiting for this day. Thanks for all your work!

intrepidhero(10000) 5 days ago [-]

Thanks for writing about your process! I enjoyed it almost as much as Crafting Interpreters, which is to say, quite a lot.

And you absolutely should be proud of your PDF->PNG->Highlight diffs script. As someone who has kept excel files and PDFs in version control I knew exactly the feeling you were talking about. I got to that part and exclaimed out loud, 'Damn cool!'

ruuda(10000) 5 days ago [-]

This was such an amazing read. Getting nervous about not being able to diff things is so relatable!

rednab(10000) 5 days ago [-]

That's going to look awesome next to my copy of Game Programming Patterns! ... What do you mean 'Temporarily Out Of Stock', amazon.co.uk!?

rgrmrts(10000) 5 days ago [-]

Woohoo, congrats Bob! I'll be picking up a copy (is there a difference in your cut for Amazon vs Barnes & Noble?) and am excited to work through it. I've been referencing the web version here and there but waited on working through it entirely til I had a physical copy :D

lawn(10000) 5 days ago [-]

I really loved Game Programming Patterns and I'll definitely need to check this out too.

The way you made your book(s), from making them available online for free to the excellent layout of the printed version, was a huge inspiration for me to write my own book. Thank you.

cxr(10000) 5 days ago [-]

> I decided to rewrite the whole build system for the book in Dart. The build script I wrote for my first book was dead simple. Literally a single Python script that took a Markdown file for each book chapter and rendered it to HTML while weaving in the code snippets. The world's dumbest static site generator. [...] Useful, but really straining the limits of how much code I want to maintain in a dynamically typed language like Python [...] it was, frankly, really slow

This is great to hear. When I read about your build system in Crafting 'Crafting Interpreters' <https://journal.stuffwithstuff.com/2020/04/05/crafting-craft...>, I had hopes that your next bugbite would be fleshing out half of a quasi-literate programming tool. For a compiler writer, who also happens to be writing a book, the circumstances and fit seem just too natural to avoid it.

I'm looking forward to diving into your Markdown package, too, and expect that it will be easy to port to JS. Nowadays, whenever I need a library for a JS project, my first choice is to check for a clean Dart, Haxe, or Java implementation that does what I want, with the intent of porting it, rather than disadvantaging myself by relying on whatever the NodeJS community has cobbled together and put on NPM. <https://news.ycombinator.com/item?id=24495646>

EDIT: There is a typo 'CCS' in the blog post. Highlighted: <https://hyp.is/d2cCEPCLEeu7jSft-ex2Ug/journal.stuffwithstuff...>

lemonade5117(10000) 4 days ago [-]

hey! I'd been thinking about this just a few days ago! It's great that the print version is finally out. Congratulations on publishing your book!

macintux(10000) 5 days ago [-]

Ordered, thanks. Regrettably I have read fewer than 1% of the books I've ever bought, but this one has a much better chance than most.

Congratulations on finally putting this project to rest.

matheusmoreira(10000) 5 days ago [-]

Congratulations. Really enjoy your blog as well, I think I've read every post about programming languages. I've cited them a lot too.

nirvdrum(10000) 5 days ago [-]

I've been waiting for the print version before diving in, so I'm excited to see it's now available. For what it's worth, I'd be willing to spend more money on an all-format package. I need to hold a book in my hand and not stare at an illuminated screen to absorb the material, but sometimes carrying a big book around is not practical, so having a Kindle copy is handy. A PDF copy is good for referencing stuff I've already read. Since the book is printed on-demand, such a package might not really be practical, but I thought I'd mention it.

Congrats on getting over the finish line, by the way. It's a huge accomplishment.

jessewmc(10000) 5 days ago [-]

Congrats! Ordered. Been waiting for a physical copy, being able to scribble notes in the margins and bookmark and flip back and forth by hand just works so much better for me.

jkcxn(10000) 5 days ago [-]

Incredible piece of work, and an inspirational process. Your book helped me write my own language compiler. I love the way you explain the pratt parsing technique and also the way the lexer and parser work together so you don't have to read the whole file at once, and how advance(), consume() and expect() functions work. It all just works together beautifully
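For readers who haven't seen that style, here is a rough sketch of the cursor helpers the comment describes, in illustrative Python rather than the book's actual Java (the real parser has more helpers and proper error recovery, which this omits): the parser pulls tokens one at a time through advance()/consume(), so it can sit directly on top of a lazy lexer instead of reading the whole file first.

```python
# Illustrative sketch (assumed names; not the book's code) of a
# single-token-lookahead cursor over a token stream.

class Parser:
    def __init__(self, tokens):
        self.tokens = iter(tokens)        # works with a lazy lexer generator
        self.current = next(self.tokens, None)

    def advance(self):
        """Consume the current token and step the cursor forward."""
        token = self.current
        self.current = next(self.tokens, None)
        return token

    def consume(self, kind):
        """advance() only if the current token has the expected kind."""
        if self.current is not None and self.current[0] == kind:
            return self.advance()
        raise SyntaxError(f"expected {kind}, got {self.current!r}")

    # expression -> NUMBER ("+" NUMBER)*
    def expression(self):
        value = int(self.consume("NUMBER")[1])
        while self.current is not None and self.current[0] == "PLUS":
            self.advance()
            value += int(self.consume("NUMBER")[1])
        return value

parser = Parser([("NUMBER", "1"), ("PLUS", "+"), ("NUMBER", "41")])
print(parser.expression())  # 42
```

The point of the shape is that only `self.current` is ever materialized; the grammar methods express "what comes next" while the two helpers own all the bookkeeping.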

gurleen_s(10000) 5 days ago [-]

Congratulations on releasing the book! Grateful for your work both as a student of computer science and as a Flutter/Dart developer. I'm planning on buying a copy after work. Are there any resources you recommend as a follow up to this book, and do you have any new content planned?

poidos(10000) 5 days ago [-]

I am in absolute awe at the PDF diff tool. Wonderful, wonderful stuff.

titzer(10000) 5 days ago [-]

I really like the hand-drawn diagrams in the sample PDF! Are they actually hand-drawn or stylized SVGs?

humanlion87(10000) 5 days ago [-]

Congrats! Will be getting a copy just because I enjoyed reading the online version. I love your writing style!

zerr(10000) 5 days ago [-]

Can you please point out the differences compared to other books for the same topic? Thanks!

marcocampos(10000) 5 days ago [-]

Congrats man, I love the book and the final physical/PDF book layout/design is amazing!

m0meni(10000) 5 days ago [-]

Congrats! Been following you for a while and just finished the blog post... what a huge effort and meticulous attention to detail. I can't wait to buy the book so I can continue to procrastinate reading the full thing in print instead of on my computer monitor.

mwigdahl(10000) 5 days ago [-]

Your book was great; I learned a ton going through it. Congratulations on getting it to print!

ceronman(10000) 5 days ago [-]

Hi Bob! Congratulations on shipping the book!

This book has been a life saver during the pandemic; beyond the world situation, I personally was a bit disappointed with my career. I was inspired by one of your posts where you mention that you were writing every single day for four years. I decided to use the same approach, and I started reading the book and coding a bit every single day. I was able to finish it completely, and I wrote two implementations of Lox. I just ordered a paper copy; from the PDF sample, it looks gorgeous!

Thank you very much for such a great contribution to the world!

manaskarekar(10000) 5 days ago [-]

Thank you so much for everything you have put out. The execution is always beautiful, the content is solid, licensing/pricing model is great, and the meta info about the layout/setup/journey is equally enjoyable.

As the saying goes, your books spark joy. Thank you.

OT: Do I need to purchase the pdf separately if I purchase the print book?

cjcampbell(10000) 5 days ago [-]

As others have said, congratulations. I've been following your progress for a good part of this journey, and I'm excited that you're finally able to hold a finished product in your hands! And now I'm stuck with a dilemma. I don't _need_ a physical copy, but I really do _want_ one.

timClicks(10000) 5 days ago [-]

Congratulations, this is such a tremendous achievement. Well done. And thank you for everything that you've done to make language hacking accessible (and Wren is superb, by the way).

rvbissell(10000) 3 days ago [-]

Just a note to say that your writing is excellent!

After seeing this HN post yesterday, I bought your /Crafting Interpreters/ book to see if it could help me evolve my AST-walker into a threaded interpreter (using GNU's computed-goto label extension). I find your writing style to be just the right mix of humor and content to keep me engaged.

mwcampbell(10000) 5 days ago [-]

Congratulations on completing this long journey. I just bought a copy, mostly to reward your effort, but also to satisfy my curiosity on an aspect of how the PDF sausage was made. I notice that the PDF version of the book doesn't include the semantic tags required for accessibility. Now, please don't take this as a criticism; I know that people (like me) who use a screen reader or other accessibility tool can go with the EPUB or online version. But, I wonder, did you consciously decide to turn off tagged PDF, or were you just going with the default in InDesign? Thanks!

skybrian(10000) 5 days ago [-]

It would be cool to see the Dart script that you wrote to build the book's website. I know it's not intended for anything else, but maybe there are some good ideas in there for other websites?

Edit: I guess that's here: https://github.com/munificent/craftinginterpreters/blob/mast...

kizer(10000) 5 days ago [-]

Great job, man. I've consulted the online version a number of times over the years. Will be purchasing this in dead tree form :D.

kelnos(10000) 5 days ago [-]

Thank you so much for all your work on this! I ran through the first part (interpreter in Java, which I ended up doing in Scala for fun) a few years ago, and so much about this sort of thing was demystified for me. I used to think building parsers, tokenizers, AST builders, and interpreters was some sort of unapproachable black magic, but now I realize it's not actually that difficult.

I started working on the second part (decided I'd do this one in Rust instead of C), but got distracted by other things and never got around to picking it up again. I look forward to getting back to it!

Thanks again, and congratulations on getting to this publishing stage!

tyingq(10000) 5 days ago [-]

Somewhat related, I recently read about the undocumented tokenizer that comes with the python re module. https://www.reddit.com/r/ProgrammerTIL/comments/4tpt03/pytho...

yatac42(10000) 5 days ago [-]

Word of warning: The Scanner class does not adhere to the maximal munch rule, so you shouldn't use it to match keywords the way that it's done in the code in the linked reddit thread. If you replace the word 'foobar' in the input with 'truebar', it will then be tokenized as the keyword true followed by the identifier 'bar', which is obviously not what you want.

criddell(10000) 5 days ago [-]

Is the Kindle version of your book protected with DRM?

munificent(10000) 5 days ago [-]

No, there is no DRM on any of the electronic formats—Kindle, EPUB, or PDF. If you buy it, it's yours. (But please don't upload it to any ebook-sharing sites.)

kurinikku(10000) 5 days ago [-]

I love Crafting Interpreters, excited to see it in print! I might just buy a copy to support the author, the illustrations do make it coffee table-worthy.

qorrect(10000) 5 days ago [-]

I'm putting it on my Christmas list as I'm broke right now, definitely a great display book!

recursivedoubts(10000) 5 days ago [-]

I used the website for my compilers class this last semester and it was great.

Recursive descent parsing and a hand-rolled lexer teach grammars and general language design far better than the alternatives.

ufo(10000) 5 days ago [-]

Sometimes I wish that intro-to-compilers books placed a bigger emphasis on recursive descent. Recursive descent works anywhere, without depending on external tools such as Bison or ANTLR. I hope that this book marks the start of a trend.

Sure, everyone should learn about LR parsers at some point. But I'd argue that outside a compiler class, knowing recursive descent is likely to be useful more often than knowing LR parsing.

Historical Discussions: Postgres Full-Text Search: A search engine in a database (July 27, 2021: 596 points)

(598) Postgres Full-Text Search: A search engine in a database

598 points 7 days ago by twakefield in 10000th position

blog.crunchydata.com | Estimated reading time – 14 minutes | comments | anchor

Early in on my SQL journey, I thought that searching for a piece of text in the database mostly involved querying like this:

SELECT col FROM table WHERE col LIKE '%some_value%';

Then I would throw in some wildcard operators or regular expressions if I wanted to get more specific.

Later on, I worked with a client who wanted search functionality in an app, so "LIKE" and regex weren't going to cut it. What I had known all along was just pattern matching. It works perfectly fine for certain purposes, but what happens when it's not just a matter of checking for a straightforward pattern in a single text field?

For example, what if you wanted to search across multiple fields? How about returning possible matches even if the search term happens to be misspelled? Also, what if you have very large amounts of data to search on? Sure, you can create indexes for columns on which you want to query for pattern matches, but that will have limitations (for instance, the B-tree index doesn't work for col LIKE '%substring%').

So when we say PostgreSQL is the 'batteries included database,' this is just one reason why. With Postgres, you don't need to immediately look farther than your own database management system for a full-text search solution. If you haven't yet given Postgres' built-in full-text search a try, read on for a simple intro.

Postgres Full-Text Search Basics for the Uninitiated

Core Postgres includes the following full-text search capabilities, to name a few:

  • Ignoring stop words (common words such as 'the' or 'an').
  • Stemming, where search matches can be based on a 'root' form, or stem, of a word ("run" matches "runs" and "running" and even "ran").
  • Weighting and ranking search matches (so best matches can be sorted to the top of a result list).

Before we go further, let's also get ourselves familiarized with the following concepts:

  1. A document is a set of data on which you want to carry out your full-text search. In Postgres, this could be built from a single column, or a combination of columns, even from multiple tables.
  2. The document is parsed into tokens, which are small fragments (e.g. words, phrases, etc) from the document's text. Tokens are then converted to more meaningful units of text called lexemes.
  3. In Postgres, this conversion is done with dictionaries -- there are built-in ones, but custom dictionaries can be created if necessary. These dictionaries help determine stop words that should get ignored, and whether differently-derived words have the same stem. Most dictionaries are for a specific language (English, German, etc) but you could also have ones that are for a specific domain.
  4. The sorted list of lexemes from the document is stored in the tsvector data type.
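As a quick illustration of steps 2 through 4 (assuming the default built-in 'english' configuration), you can inspect the generated tsvector directly. The output should look something like the comment below, though exact lexemes depend on the dictionary in use:

```sql
SELECT to_tsvector('english', 'The quick brown foxes ran');
-- 'brown':3 'fox':4 'quick':2 'ran':5
-- "The" is dropped as a stop word, "foxes" is stemmed to "fox",
-- and each lexeme keeps the position(s) where it occurred.
```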

Example: Searching Storm Event Details

I have a table that contains storm events data gathered by the U.S. National Weather Service. For simplicity's sake I won't include all possible fields in the statement below, but there's a copy of the data and some further information available in this repository.

CREATE TABLE se_details (
    episode_id int,
    event_id int primary key,
    state text,
    event_type text,
    begin_date_time timestamp,
    episode_narrative text,
    event_narrative text
);

Let's also say that we want to carry out a full-text search on the data on the event_narrative column. We could add a new column to the table to store the preprocessed search document (i.e. the list of lexemes):

ALTER TABLE se_details ADD COLUMN ts tsvector 
    GENERATED ALWAYS AS (to_tsvector('english', event_narrative)) STORED;

ts is a generated column (new as of Postgres 12), and it's automatically synced with the source data.

We can then create a GIN index on ts:

CREATE INDEX ts_idx ON se_details USING GIN (ts);

And then we can query like so:

SELECT state, begin_date_time, event_type, event_narrative
FROM se_details
WHERE ts @@ to_tsquery('english', 'tornado');

tsquery is the other full-text search data type in Postgres. It represents search terms that have also been processed as lexemes, so we'll pass in our input term to the to_tsquery function in order to optimize our query for full-text search. (@@ is a match operator.)

What we get with this query are records where 'tornado' is somewhere in the text string, but in addition to that, here are a couple of records in the result set where there are also matches for 'tornado' as lexeme ('tornado-like' and 'tornadoes'):

state           | KENTUCKY
begin_date_time | 2018-04-03 18:08:00
event_type      | Thunderstorm Wind
event_narrative | A 1.5 mile wide swath of winds gusting to around 95 mph created tornado-like damage along Kentucky Highway 259 in Edmonson County. The winds, extending 3/4 of a mile north and south of Bee Spring, destroyed or heavily damaged several small outbuildings, tore part of the roof off of one home, uprooted and snapped the trunks of numerous trees, and snapped around a dozen power poles. Several other homes sustained roof damage, and wind-driven hail shredded vinyl siding on a number of buildings.

state           | WISCONSIN
begin_date_time | 2018-08-28 15:30:00
event_type      | Thunderstorm Wind
event_narrative | A swath of widespread tree and crop damage across the southern portion of the county. Sections of trees and crops completely flattened, and some structural damage from fallen trees or due to the strong downburst winds. Various roads closed due to fallen trees. Two semi-trucks were overturned on highway 57 in Waldo. The widespread wind damage and tornadoes caused structural damage to many homes with 70 homes sustaining affected damage, 3 homes with minor damage, 2 homes with major damage, one home destroyed, and 2 businesses with minor damage.

Searching for Phrases

One way to handle phrases as search terms is to use the & (AND) or <-> (FOLLOWED BY) Boolean operators with the tsquery.

For example, if we want to search for the phrase 'rain of debris':

SELECT state, begin_date_time, event_type, event_narrative
FROM se_details
WHERE ts @@ to_tsquery('english', 'rain & of & debris');

The search phrase gets normalized to 'rain' & 'debri'. The order doesn't matter as long as both 'rain' and 'debri' have matches in the document, such as this example:

A debris flow caused by heavy rain on a saturated hillside blocked the Omak River Road one mile south of the intersection with State Route 97.

If we do to_tsquery('english', 'rain <-> of <-> debris') the tsquery value is 'rain' <2> 'debri', meaning it will only match where 'rain' is followed by 'debri' precisely two positions away, such as here:

Heavy rain caused debris flows on the Coal Hollow Fire and Tank Hollow Fire burn scars.

(This was actually the only match, so using the <-> operator is a little bit more restrictive.)

The phraseto_tsquery function can also parse the phrase itself, inserting <N> between lexemes, where N is the integer position of the next lexeme counting from the preceding one. Unlike to_tsquery, this function doesn't recognize operators; we can just pass in the entire phrase like so:

phraseto_tsquery('english', 'rain of debris')

The tsquery value is 'rain' <2> 'debri' like above, so phraseto_tsquery also accounts for positioning.

Functions for Weighting and Ranking Search Results

One very common use case for assigning different weights and ranking is searching on articles. For example, you may want to merge the article title and abstract or content together for search, but want matches on title to be considered more relevant and thus rank higher.

Going back to our storm events example, our data table also has an episode_narrative column in addition to event_narrative. For storm data, an event is an individual type of storm event (e.g. flood, hail), while an episode is an entire storm system and could contain many different types of events.

Let's say we want to be able to carry out a full-text search on event as well as episode narratives, but have decided that the event narrative should weigh more than the episode narratives. We could define the ts column like this instead:

ALTER TABLE se_details ADD COLUMN ts tsvector
    GENERATED ALWAYS AS
        (setweight(to_tsvector('english', coalesce(event_narrative, '')), 'A') ||
         setweight(to_tsvector('english', coalesce(episode_narrative, '')), 'B')) STORED;

setweight is a full-text function that assigns a weight to the components of a document. The function takes the characters 'A', 'B', 'C', or 'D' (most weight to least, in that order). We're also using coalesce here so that the concatenation doesn't result in null if either episode_narrative or event_narrative contains null values.

You could then use the ts_rank function in an ORDER BY clause to return results from most relevant to least:

ORDER BY ts_rank(ts, to_tsquery('english', 'tornado')) DESC;
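Putting the pieces together, a ranked query over the same storm-events table might look something like this sketch:

```sql
SELECT state, event_type,
       ts_rank(ts, to_tsquery('english', 'tornado')) AS rank
FROM se_details
WHERE ts @@ to_tsquery('english', 'tornado')
ORDER BY rank DESC  -- most relevant matches first
LIMIT 10;
```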

So, this record is ranked higher in the search results:

state             | MISSISSIPPI
begin_date_time   | 2018-04-06 22:18:00
event_type        | Tornado
event_narrative   | This tornado touched down near the Jefferson Davis-Covington County line along Lucas Hollow Road. It continued southeast, crossing the county line. Some large limbs and trees were snapped and uprooted at this location. It then crossed Lucas Hollow Road again before crossing Leonard Road. A tornado debris signature was indicated on radar in these locations. The tornado uprooted and snapped many trees in this region. It also overturned a small tractor trailer on Oakvale Road and caused some minor shingle damage to a home. After crossing Oakvale Road twice, the tornado lifted before crossing Highway 35. The maximum winds in this tornado was 105mph and total path length was 2.91 miles. The maximum path width was 440 yards.
episode_narrative | A warm front was stretched across the region on April 6th. As a disturbance rode along this stalled front, it brought copious amounts of rain to the region thanks to ample moisture in place. As daytime heating occurred, some storms developed which brought severe weather to the region.

Compared to this, where there is a match for 'tornado' in episode_narrative but not event_narrative:

state             | NEBRASKA
begin_date_time   | 2018-06-06 18:10:00
event_type        | Hail
event_narrative   | Hail predominately penny size with some quarter size hail mixed in.
episode_narrative | Severe storms developed in the Nebraska Panhandle during the early evening hours of June 6th. As this activity tracked east, a broken line of strong to severe thunderstorms developed. Hail up to the size of ping pong balls, thunderstorm wind gusts to 70 MPH and a brief tornado touchdown were reported. Heavy rain also fell leading to flash flooding in western Keith county.

Tip: ts_rank returns a floating-point value, so you could include the expression in your SELECT to see how these matches score. In my case I get around a 0.890 for the Mississippi event, and 0.243 for the Nebraska event.

Yes, You Can Keep Full-Text Search in Postgres

You can get even deeper and make your Postgres full-text search even more robust, by implementing features such as highlighting results, or writing your own custom dictionaries or functions. You could also look into enabling extensions such as unaccent (remove diacritic signs from lexemes) or pg_trgm (for fuzzy search). Speaking of extensions, those were just two of the extensions supported in Crunchy Bridge. We've built our managed cloud Postgres service such that you can dive right in and take advantage of all these Postgres features.

With all that said: as you can see, you don't need a very involved setup to get started. It's a good idea to try it out, whether you are just beginning to explore a full-text search solution or reevaluating whether you need to go all out for a dedicated full-text search service, especially if you already have Postgres in your stack.

To be fair, Postgres doesn't have some search features that are available with platforms such as Elasticsearch. But a major advantage is that you won't have to maintain and sync a separate data store. If you don't quite need search at super scale, there might be more for you to gain by minimizing dependencies. Plus, the Postgres query syntax you already know, with the addition of some new functions and operators, can get you pretty far. Got any other questions or thoughts about full-text search with Postgres? We're happy to hear them on @crunchydata.

All Comments: [-] | anchor

lettergram(10000) 7 days ago [-]

I actually built a search engine back in 2018 using postgresql


Worked quite well and still use it daily. Basically doing weighted searches on vectors is slower than my approach, but definitely good enough.

Currently, I can search around 50m HN & Reddit comments in 200ms on the postgresql running on my machine.

vincnetas(10000) 7 days ago [-]

Offtopic, but curious: what are your use cases when searching all HN and Reddit comments? I'm at the beginning of this path; I've just crawled HN, but what to do with it is still a bit cloudy.

rattray(10000) 7 days ago [-]

Nice – looks like the ~same approach recommended here of adding a generated `tsvector` column with a GIN index and querying it with `col @@ to_tsquery('english', query)`.

theandrewbailey(10000) 7 days ago [-]

> You could also look into enabling extensions such as unaccent (remove diacritic signs from lexemes) or pg_trgm (for fuzzy search).

Trigrams (pg_trgm) are practically needed for usable search when it comes to misspellings and compound words (e.g. a search for 'down loads' won't return 'downloads').

I also recommend using websearch_to_tsquery instead of using the cryptic syntax of to_tsquery.
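For the curious, a sketch of the difference: websearch_to_tsquery accepts the quoted-phrase and minus syntax most users already know, while to_tsquery wants the raw operators. Exact lexemes depend on your dictionary:

```sql
-- Google-ish syntax: quoted phrases and '-' for negation
SELECT websearch_to_tsquery('english', '"rain of debris" -tornado');
-- roughly: 'rain' <2> 'debri' & !'tornado'

-- the equivalent spelled out in to_tsquery's operator syntax
SELECT to_tsquery('english', 'rain <-> of <-> debris & !tornado');
```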

kyrra(10000) 7 days ago [-]

Trigrams are amazing. I was doing a sideproject where I wanted to allow for substring searching, and trigrams seemed to be the only way to do it (easily/well) in postgres. Gitlab did a great writeup on this a few years ago that really helped me understand it:


You can also always read the official docs:
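A minimal sketch of what that looks like, reusing the article's se_details table (the index name is made up):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- A trigram GIN index lets LIKE/ILIKE '%substring%' use an index
-- instead of a sequential scan
CREATE INDEX se_details_trgm_idx ON se_details
    USING GIN (event_narrative gin_trgm_ops);

SELECT event_id
FROM se_details
WHERE event_narrative ILIKE '%ornado%';  -- arbitrary substring match
```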


jcuenod(10000) 7 days ago [-]

Huh, just yesterday I blogged[0] about using FTS in SQLite[1] to search my PDF database. SQLite's full-text search is really excellent. The thing that tripped me up for a while was `GROUP BY` with the `snippet`/`highlight` function but that's the point of the blog post.

[0] https://jcuenod.github.io/bibletech/2021/07/26/full-text-sea...

[1] https://www.sqlite.org/fts5.html
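For anyone who hasn't tried it, a minimal FTS5 sketch (the table and data are made up; snippet() takes the 0-based column index, highlight delimiters, an ellipsis string, and a token budget):

```sql
CREATE VIRTUAL TABLE docs USING fts5(title, body);

INSERT INTO docs VALUES ('notes', 'Full-text search in SQLite is built in.');

SELECT title, snippet(docs, 1, '[', ']', '...', 8)
FROM docs
WHERE docs MATCH 'sqlite';
```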

bob1029(10000) 6 days ago [-]

We've been using SQLite's FTS capabilities to index customer log files since 2017 or so. It's been a wonderful approach for us. Even if we move to our own in-house data store (event sourced log), we would still continue using SQLite for tracing because it brings so many of these sorts of benefits.

edwinyzh(10000) 6 days ago [-]

A very well written article about SQLite FTS5! One question - it seems that your search result displays the matching paging number, how did you do that? because as far as I know, unlike FTS4, FTS5 has no `offsets` function.

MushyRoom(10000) 7 days ago [-]

I was hyped when I found out about it a while ago. Then I wasn't anymore.

When you have 12 locales (kr/ru/cn/jp/..) it's not that fun anymore. Especially on a one man project :)

freewizard(10000) 7 days ago [-]

For small project and simple full text search requirement, try this generic parser: https://github.com/freewizard/pg_cjk_parser

oauea(10000) 7 days ago [-]

Why support so many locales in a one man project?

tabbott(10000) 7 days ago [-]

Zulip's search is powered by this built-in Postgres full-text search feature, and it's been a fantastic experience. There's a few things I love about it:

* One can cheaply compose full-text search with other search operators by just doing normal joins on database indexes, which means we can cheaply and performantly support tons of useful operators (https://zulip.com/help/search-for-messages).

* We don't have to build a pipeline to synchronize data between the real database and the search database. Being a chat product, a lot of the things users search for are things that changed recently; so lag, races, and inconsistencies are important to avoid. With the Postgres full-text search, all one needs to do is commit database transactions as usual, and we know that all future searches will return correct results.

* We don't have to operate, manage, and scale a separate service just to support search. And neither do the thousands of self-hosted Zulip installations.

Responding to the 'Scaling bottleneck' concerns in comments below, one can send search traffic (which is fundamentally read-only) to a replica, with much less complexity than a dedicated search service.

Doing fancy scoring pipelines is a good reason to use a specialized search service over the Postgres feature.

I should also mention that a weakness of Postgres full-text search is that it only supports doing stemming for one language. The excellent PGroonga extension (https://pgroonga.github.io/) supports search in all languages; it's a huge improvement especially for character-based languages like Japanese. We're planning to migrate Zulip to using it by default; right now it's available as an option.

More details are available here: https://zulip.readthedocs.io/en/latest/subsystems/full-text-...

stavros(10000) 7 days ago [-]

I cannot tell you how much I love Zulip, but I can tell you that I have no friends any more because everyone is tired of me evangelizing it.

brightball(10000) 7 days ago [-]

All of this. It's such a good operational experience that I will actively fight against the introduction of a dedicated search tool unless it's absolutely necessary.

rattray(10000) 7 days ago [-]

TBH I hadn't known you could do weighted ranking with Postgres search before.

Curious there's no mention of zombodb[0] though, which gives you the full power of elasticsearch from within postgres (with consistency, no less!). You have to be willing to tolerate slow writes, of course, so using postgres' built-in search functionality still makes sense for a lot of cases.

[0] https://github.com/zombodb/zombodb

craigkerstiens(10000) 7 days ago [-]

Zombo is definitely super interesting and we should probably add a bit in the post about it. Part of the goal here was to show that you can do a LOT with Postgres without adding one more system to maintain. Zombo is great if you have Elastic around but want Postgres as the primary interface; but what if you don't want to maintain Elastic?

My ideal is always to start with Postgres, and then see if it can solve my problem. I would never say Postgres is the best at everything it can do, but for most things it is good enough without having another system to maintain and wear a pager for.

syoc(10000) 7 days ago [-]

My worst search experiences always come from the features applauded here. Word stemming and removing stop words is a big hurdle when you know what you are looking for but get flooded by noise because some part of the search string was ignored. Another issue is having to type out a full word before you get a hit in dynamic search boxes (looking at you Confluence).

Someone1234(10000) 7 days ago [-]

I'd argue that isn't a problem with the feature, but a thoughtless implementation.

A good implementation will weigh verbatim results highest before considering the stop-word stripped or stemmed version. Configuring to_tsvector() to not strip stop words or using a stemming dictionary is, in my opinion, a little clunky in Postgres: You'll want to make a new [language] dictionary and then call to_tsvector() using your new dictionary as the first parameter.

After you've set up the dictionary globally, this would look something like:

setweight(to_tsvector('english_no_stem_stop', col), 'A') || setweight(to_tsvector('english', col), 'B'))

I think blaming Postgres for adding stemming/stop-word support because it can be [ab]used for a poor search user experience is like blaming a hammer for a poorly built home. It is just a tool, it can be used for good or evil.

PS - You can do a verbatim search without using to_tsvector(), but that cannot be easily passed into setweight() and you cannot use features like ts_rank().

shakascchen(10000) 7 days ago [-]

No fun doing it for Chinese, especially for traditional Chinese.

I had to install software but on Cloud SQL you can't. You have to do it on your instances.

justusw(10000) 6 days ago [-]

Same story for performing searches in Japanese.

rattray(10000) 7 days ago [-]

Something that's missing from this which I'm curious about is how far can Postgres search take you?

That is, what tends to be the 'killer feature' that makes teams groan and set up Elasticsearch because you just can't do it in Postgres and your business needs it?

Having dealt with ES, I'd really like to avoid the operational burden if possible, but I wouldn't want to choose an intermediary solution without being able to say, 'keep in mind we'll need to budget a 3-mo transition to ES once we need X, Y, or Z'.

nostrademons(10000) 7 days ago [-]

Used to work on Google Search, used ES extensively for a startup I founded (which was sort of quasi-search...it was built around feed ranking, where the query is constant and a stream of documents is constantly coming in), and have also used Postgres extensively in other companies.

The big problem with all the off-the-shelf search solutions (RDBMS full-text search, ES, Algolia) is that search ranking is a complicated and subtle problem, and frequently depends on signals that are not in the document itself. Google's big insight is that how other people talk about a website is more important than how the website talks about itself, and its ranking algorithm weights accordingly.

ES has the basic building blocks to construct such a ranking algorithm. In terms of fundamental infrastructure I found ES to be just as good as Google, and better in some ways. But its out-of-the-box ranking function sucks. Expect to put a domain expert just on search ranking and evaluation to get decent results, and they're going to have to delve pretty deeply into advanced features of ES to get there.

AFAICT Postgres search only lets you tweak the ranking algorithm by assigning different weights to fields, assuming that the final document score is a linear combination of individual fields. This is usually not what you want - it's pretty common to have non-linear terms from different signals.

sandGorgon(10000) 7 days ago [-]

It doesn't do TF-IDF or BM25, the current state of the art in search relevance algorithms.

That's where it can't be used for anything serious.

Thaxll(10000) 7 days ago [-]

PG is average at best for text search, it's not even good.

nextaccountic(10000) 7 days ago [-]

If your search needs outgrow Postgres' native search engine, you can use Postgres search with an ElasticSearch backend, using Zombo


It basically gives you a new kind of index (create index .. using zombodb(..) ..)

mjewkes(10000) 7 days ago [-]

Exact phrase matching. This generally requires falling back to ILIKE, which is not performant.

amichal(10000) 7 days ago [-]

Things Postgres was not good at (for us):

- IDF and other corpus-based relevancy measures: had to hand-roll
- thesaurus and misspellings: again possible, of course, with preprocessing and by adding config files
- non-Latin-alphabet languages, e.g. Arabic: needed filesystem access to add a dictionary-based stemmer/word breaker (we used AWS RDS so couldn't do it)

We used ES or Solr for those cases. For English FTS with 100k documents, doing it in PG is super easy and one less dependency.

some_developer(10000) 7 days ago [-]

Anecdotal note:

A few years ago we added yet-another part to our product and, whilst ES worked 'okay', we got a bit weary of ES due to 'some issues' (some bug in the architecture keeping things not perfect in sync, certain queries with 'joins' of types taking long, demand on HW due to the size of database, no proper multi-node setup due to $$$ and time constraint, etc.; small things piling up over time).

Bright idea: let's see how far Postgres, which is our primary datastore, can take us!

Unfortunately, the feature never made it fully into production.

We thought that on paper, the basic requirements were ideal:

- although the table has multiple hundreds of millions of entries, natural segmentation by customer IDs made individual result sets much smaller

- no weighted search results needed: datetime-based ordering is perfectly fine for this use case, so we thought it would be easy to come up with the 'perfect index [tm]'

Alas, we didn't even get that far:

- we identified ('only') 2 columns necessary for the search => 'yay, easy'

- one of those columns was multi-language; though we didn't have specific requirements and did not have to deal with language-specific behaviour in ES, we had to decide on one for the TS vectorization (the details of why 'simple' wasn't appropriate for this one column elude me; it certainly was for the other one)

- unsure which one, or both, we would need, so for one of the columns we created both indices (the difference being the 'language')

- we started out with a GIN index (see https://www.postgresql.org/docs/9.6/textsearch-indexes.html )

- creating a single index took > 15 hours

But once the second index was done, and we had not even rolled out the feature in the app itself (which at this point was still an ever-changing MVP), we suddenly got hit by a lot of customer complaints that totally different operations on this table (INSERTs and UPDATEs) had started to get slow (like 5-15 seconds slow, for something which usually takes tiny ms).

Backend developer eyes were wide open O_O

But since we knew the second index had just finished, after checking the Postgres logs we decided to drop the FTS indices and, lo and behold, 'performance problem solved'.

Communication lines were very short back then (still are today, actually) and it was promptly decided we just cut the search functionality from this new part of the product and be done with it. This also solved the problem, basically (guess there's some 'business lesson' to be learned here too, not just technical ones).

Since no one within the company counter argued this decision, we did not spend more time analyzing the details of the performance issue though I would have loved to dig into this and get an expert on board to dissect this.


A year later or so I had a bit free time and analyzed one annoying recurring slow UPDATE query problem on a completely different table, but also involving FTS on a single column there also using a GIN index. That's when I stumble over https://www.postgresql.org/docs/9.6/gin-implementation.html

> Updating a GIN index tends to be slow because of the intrinsic nature of inverted indexes: inserting or updating one heap row can cause many inserts into the index (one for each key extracted from the indexed item). As of PostgreSQL 8.4, GIN is capable of postponing much of this work by inserting new tuples into a temporary, unsorted list of pending entries. When the table is vacuumed or autoanalyzed, or when gin_clean_pending_list function is called, or if the pending list becomes larger than gin_pending_list_limit, the entries are moved to the main GIN data structure using the same bulk insert techniques used during initial index creation. This greatly improves GIN index update speed, even counting the additional vacuum overhead. Moreover the overhead work can be done by a background process instead of in foreground query processing.

In this particular case I was able to solve the occasional slow UPDATE queries with 'FASTUPDATE=OFF' on that table and, thinking back about the other issue, it might have solved or minimized the impact.
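For reference, a sketch of the knobs involved (the index name is borrowed from the article; check the docs for your Postgres version):

```sql
-- Disable the pending-list fast path on an existing GIN index:
-- individual inserts pay the full index-update cost, but latency
-- becomes predictable instead of spiking when the list is flushed.
ALTER INDEX ts_idx SET (fastupdate = off);

-- Flush entries already sitting in the pending list:
SELECT gin_clean_pending_list('ts_idx'::regclass);
```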

Back to the original story: yep, this one table can have 'peaks' of inserts but it's far from 'facebook scale' or whatever, basically 1.5k inserts / second were the absolute rare peak I measured and usually it's in the <500 area. But I guess it was enough for this scenario to add latency within the database.


Thinking back further, I was always 'pro' trying to minimize / get rid of ES after reading http://rachbelaid.com/postgres-full-text-search-is-good-enou... even before we used any FTS feature. It also mentions the GIN/GiST issue but alas, in our case: Elasticsearch is good enough and, despite the setbacks we've had with it, actually easier to reason about (so far).
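The pending-list mechanism the quoted docs describe can be sketched with a toy inverted index in Python. This is only an illustration of the idea (one inserted row touches one posting list per distinct word, so buffering and bulk-merging is cheaper), not Postgres internals; all names are made up:

```python
from collections import defaultdict

class TinyInvertedIndex:
    """Toy inverted index: one inserted document touches one
    posting list per distinct word, which is why unbuffered
    GIN-style updates are expensive."""

    def __init__(self, pending_limit=2):
        self.postings = defaultdict(set)    # word -> set of doc ids
        self.pending = []                   # buffered (doc_id, text) pairs
        self.pending_limit = pending_limit  # loosely analogous to gin_pending_list_limit

    def insert(self, doc_id, text):
        # Cheap path: append to the pending list instead of updating
        # every posting list immediately (the fastupdate=on idea).
        self.pending.append((doc_id, text))
        if len(self.pending) > self.pending_limit:
            self.flush()

    def flush(self):
        # Bulk-merge pending entries into the main structure, paying
        # the many-posting-lists cost once per batch.
        for doc_id, text in self.pending:
            for word in set(text.lower().split()):
                self.postings[word].add(doc_id)
        self.pending = []

    def search(self, word):
        self.flush()  # make buffered docs visible before querying
        return sorted(self.postings[word.lower()])
```

The trade-off is the same one the docs note: writes get faster, but queries (or a background process) must eventually pay to merge the pending list.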

manigandham(10000) 6 days ago [-]

It can't go that far. Postgres is limited by dictionaries, stemming/processing, query semantics (like fuzzy searching), and, the biggest issue of all, a lack of modern relevance algorithms. It's good for limited scenarios where you just need more than a SQL LIKE statement, and chaining some functions together can get you decent results [1] without adding another datastore.

However search tech is pretty mature with Lucene at the core and there are many better options [2] from in-process libraries to simple standalone servers to full distributed systems like Elastic. There are also other databases (relational like MemSQL, or documentstores like MongoDB/RavenDB) that are adding search as native querying functions with most of the abilities of ES. If search is a core or complex part of your application (like patterns in raw image data or similarities in audio waveforms) then that's where ES will excel.

1. https://stackoverflow.com/questions/46122175/fulltext-search...

2. https://gist.github.com/manigandham/58320ddb24fed654b57b4ba2...

grncdr(10000) 7 days ago [-]

My experience has been that sorting by relevance ranking is quite expensive. I looked into this a bit and found https://github.com/postgrespro/rum (and some earlier slide decks about it) that explains why the GIN index type can't support searching and ranking itself (meaning you need to do heap scans for ranking). This is especially problematic if your users routinely do searches that match a lot of documents and you only want to show the top X results.

Edit: if any of the Crunchy Data people are reading this: support for RUM indexes would be super cool to have in your managed service.
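The "match many rows, show only the top X" problem described above can be sketched in a few lines of Python. This is an illustration, not Postgres code: without an index that stores ranking information (as RUM does), every matching row must be fetched and scored before the best few can be returned.

```python
import heapq

def top_x(matching_ids, scores, x):
    """Return the x highest-scoring document ids.

    Note that scores[...] is evaluated for *every* matching row
    (the per-row heap fetch), even though only x rows are shown.
    """
    return heapq.nlargest(x, matching_ids, key=lambda doc_id: scores[doc_id])

# Hypothetical relevance scores for four matching documents.
scores = {1: 0.2, 2: 0.9, 3: 0.5, 4: 0.7}
top = top_x([1, 2, 3, 4], scores, 2)  # scores all 4 rows to return 2
```

The cost is proportional to the number of matches, not the number of results shown, which is exactly why broad queries hurt.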

animeshjain(10000) 6 days ago [-]

From what I know, full text search in Postgres (and MySQL) does not support faceted search, so it only returns full-text results from the entire index.

Actually, filtering is possible, but searching within a particular segment of rows is a very slow operation: say, a text search for all employees with names matching 'x' in organization id 'y'.

It can't utilise the index on organization id in this case, so the query results in a full scan.

xcambar(10000) 7 days ago [-]

This is an anecdote, not proper feedback, since I wasn't directly involved in the topic.

My company relied on PG as its search engine and everything went well from POC to production. After a few years of production and new clients requiring volumes of data an order of magnitude above our comfort zone, things went south pretty fast.

Not many months, but many sweaty weeks of engineering later, we switched to ES and we're not looking back.

tl;dr: even with great DB engineers (which we had), scale is a strong limiting factor for this feature.

_009(10000) 7 days ago [-]

If you are looking to do semantic search (cosine similarity) plus filtering (SQL) on data that can be represented as vectors (audio, text, video, bio), I suggest https://github.com/ankane/pgvector
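For illustration, the cosine similarity these vector-search tools compute is just a normalized dot product; a minimal stdlib Python version:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors:
    1.0 means same direction (most similar), 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

What pgvector and similar systems add is the hard part: indexing so you can find the nearest vectors without comparing the query against every row.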

iav(10000) 7 days ago [-]

I moved from ElasticSearch to PG FTS in production, and here are the things I had to give up:

1. PostgreSQL has a cap on column length, and the search index has to be stored in a column. The length of the column is indeterminate - it is storing every word in the document and where it's located, so a short document with very many unique words (numbers are treated as words too) can easily burst the cap. This means you have to truncate each document before indexing it, and pray that your cap is set low enough. You can use multiple columns but that slows down search and makes ranking a lot more complicated. I truncate documents at 4MB.

2. PostgreSQL supports custom dictionaries for specific languages, stemmers, and other nice tricks, but none of those are supported by AWS because the dictionary gets stored as a file on the filesystem (it's not a config setting). You can still have custom rules like whether or not numbers count as words.

gk1(10000) 7 days ago [-]

Semantic search using text embeddings. With Open Distro for Elasticsearch you can store your text embeddings and then perform a nearest-neighbor search[1] to find most similar documents using cosine similarity[2]. Elasticsearch (vanilla) will get this feature with 8.0.

If migrating to ES makes you groan you can use a managed service like Pinecone[3] (disclaimer: I work there) just for storing and searching through text embeddings in-memory through an API while keeping the rest of your data in PG.

[1] Nearest-neighbor searches in Open Distro: https://opendistro.github.io/for-elasticsearch-docs/docs/knn...

[2] More on how semantic similarity is measured: https://www.pinecone.io/learn/semantic-search/

[3] https://www.pinecone.io

matsemann(10000) 7 days ago [-]

I'm not well versed on modern pg for this use, but when I managed a Solr instance ~5 years ago, it was the ranking of the results that was the killer feature. Finding results fast most systems can do. Knowing which results to present is harder.

Our case was a domain specific knowledge base, with certain terms occurring often in many articles. Searching for a term could bring up thousands of results, but few of them were actually relevant to show in context of the search, they just happened to use the term.

jka(10000) 7 days ago [-]

'Faceted search'[1] (aka aggregates in Elasticsearch) tends to be a popular one, to provide user-facing content navigation.

That said, simonw has been on the case[2] demonstrating an implementation of that using Django and PostgreSQL.

[1] - https://en.wikipedia.org/wiki/Faceted_search

[2] - https://simonwillison.net/2017/Oct/5/django-postgresql-facet...

brightball(10000) 7 days ago [-]

Streaming data ingestion is the biggest. If you're constantly writing data to be searched, this is where ES really outshines everything.

kayodelycaon(10000) 7 days ago [-]

If I recall correctly, Postgres search doesn't scale well. Not sure where it falls apart but it isn't optimized in the same way something like Solr is.

fizx(10000) 7 days ago [-]

Hi, I started an Elasticsearch hosting company, since sold, and have built products on PG's search and SQLite FTS search.

There are in my mind two reasons to not use PG's search.

1. Elasticsearch allows you to build sophisticated linguistic and feature scoring pipelines to optimize your search quality. This is not a typical use case in PG.

2. Your primary database is usually your scaling bottleneck even without adding a relatively expensive search workload into the mix. A full-text search tends to be around as expensive as a 5% table scan of the related table. Most DBAs don't like large scan workloads.

nuker(10000) 6 days ago [-]

Is there an alternative to ES that scales nicely? I'm running the ELK stack for logging on AWS Elasticsearch. Logs have unpredictable traffic volume and even an overprovisioned ES cluster gets clogged sometimes. I wonder if there's something more scalable than ES that also has a nice GUI like Kibana?

shard972(10000) 6 days ago [-]


jillesvangurp(10000) 6 days ago [-]

It's more a matter of configuring it right. I'd recommend trying out Elastic Cloud. It's a bit easier to deal with than Amazon's offering and much better supported. AWS has always been a bit hands-off on that front. Their opensearch project does not seem to break that pattern so far.

Also, with Elastic Cloud you get some access to useful features for logging (like life cycle management and data streams) that will help you scale the setup.

Kibana in recent iterations has actually improved quite a bit. The version you are getting from Amazon is probably a bit bare bones in comparison. One nice thing with Elastic is that going with the defaults gets you some useful dashboards out of the box if you use e.g. file or docker beats for collecting logs.

hnarn(10000) 6 days ago [-]

I don't know what features AWS provides but in general terms logs benefit a lot from compression, so if I were to set up this on my own I'd probably want to try something like a VDO or ZFS backed storage system as well as compressed transfers (perhaps in batch if that's required).

bityard(10000) 7 days ago [-]

I know Postgres and SQLite have mostly different purposes but FWIW, SQLite also has a surprisingly capable full-text search extension built right in: https://www.sqlite.org/fts5.html

jjice(10000) 7 days ago [-]

It's very impressive, especially considering the SQLite version you're already using probably already has it enabled. I use it for a small site I run and it works fantastically. It's a little finicky with deletes and updates due to SQLite's virtual tables, but definitely impressive and has its uses.
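A minimal sketch of that extension from Python's stdlib sqlite3, assuming the interpreter's SQLite was compiled with FTS5 (most standard builds are):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table: this simple form stores and indexes both columns.
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany(
    "INSERT INTO docs (title, body) VALUES (?, ?)",
    [
        ("Postgres FTS", "tsvector and GIN indexes"),
        ("SQLite FTS5", "full-text search built right in"),
    ],
)
# MATCH runs the full-text query; bm25() ranks hits (lower = better).
rows = conn.execute(
    "SELECT title FROM docs WHERE docs MATCH 'search' ORDER BY bm25(docs)"
).fetchall()
```

The finickiness mentioned above comes from the virtual-table design: an ordinary UPDATE/DELETE must keep the FTS shadow tables in sync, which external-content configurations make your responsibility.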

(584) Life before smartphones (2020)

584 points about 20 hours ago by evo_9 in 10000th position

mattruby.substack.com | Estimated reading time – 15 minutes | comments | anchor

The Rubesletter is a newsletter with thoughts from Matt Ruby, comedian/writer/creator of Vooza (email [email protected]).

I was not alerted

I used to get lost all the time. I'd ask for directions, look for landmarks, fold maps, carry a guidebook, and keep an atlas in the glove compartment. I never knew when the next train was coming. I waited around a lot.

I memorized phone numbers, jotted things down in notebooks, had conversations with taxi drivers, talked to random people at bars, wrote checks, went to the bank, and daydreamed. I was grossly inefficient and terribly bored. I rarely got what I wanted and, when I did, I had to wait at least 8-10 days for it to be delivered. I was not archived, nor was I searchable; things I said just disappeared forever.

I had no idea how many steps I'd walked or stairs I'd climbed. My desk's height did not adjust; I just sat in a chair and took it. I tolerated unstapled stomachs, breasts which subjugated themselves to gravity, and butts that were incapable of functioning as shelves. I had no influence and never disrupted anything. Strangers did not wish me a happy birthday or "Like" me. My personal brand was invisible.

I operated on hunches, browsed bookstores, and fearlessly entered restaurants on a whim, with no knowledge of the party of eight who'd travelled all the way from Connecticut to dine there and who, despite their reservations for 8:45pm, were not seated until 9:30pm and then had to endure a server who was extremely rude, unprofessional, and "tattooed up on his neck."

I did not eat gummy bears, worms, or any other gummy species. I never charged my weed, microdosed, or took pills to help me focus. Only doctors took my temperature and masks were for parties. My life lacked motivational quotes, nutrition tips, and workout advice. My wellness ran dry.

I did not take photos of myself, was not filtered, and had no idea what I looked like as a bunny rabbit, puppy, or unicorn. I had to buy film, load it in a camera, carry it around, find something worth shooting, get the film developed, and then pick up the prints. I only had 36 shots so each one mattered; I was constantly forced to ask myself, "Do I actually want a photo of this?" Also, my genitals went unphotographed.

Doing my best Ethan Hawke impression while riding the rails in Europe in the 90's.

There was no surveillance of the streets. Crimes occurred and there was no footage to review. Planes crashed and we only saw the wreckage. There were no body cams and only spies could install hidden cameras. I trusted the nanny. We all did. It must have been a field day for nannies.

I was rejected to my face and broken up with in person. I was not polyamorous and, truth be told, was gleeful if just one woman agreed to be in a relationship with me. In order to go on a date, I had to approach a woman, talk to her, get her number, call her, talk to her again, and ask her out. It was Kafkaesque. Once plans were made, I showed up without any further contact to check whether we were, in fact, "still on for tonight," 'running late," "at the bar," "in the back," or "here." It's a miracle we ever found each other.

News was not breaking and I was not alerted. Being elite was a good thing and being a Nazi frowned upon. Scientists were trusted and conspiracy theories were for tinfoil kooks. The only content users generated was letters to the editor.

I consumed news once a day by reading a paper that stained my hands. I stumbled upon random articles I would never have selected based on the headline. The ads I saw were untargeted shotgun blasts. Quizzes were just for students and I did not know which ice cream flavor matched my personality, who should play my BFF in a movie of my life, or which Disney prince I should have a threesome with. I rarely got to feel outraged by the words of people I'd never met. For that, I had to rely on family.

I made mixtapes and went to record stores. I put five discs in a CD changer and they were my soundtrack for months at a time. At concerts, musicians did not use computers, singers missed notes, and drummers hit skins with sticks. Things went wrong and we meekly accepted these mistakes as part of our off-key lives.

I read books with dog-eared pages, highlighted passages, and untrustworthy narrators. I'd read authors without knowing if they were allies or enemies. I lacked certitude.

The only bingeing I did involved alcohol. I'd wait an entire week to watch the next episode. I listened to whatever was on the radio, rarely watched documentaries, and knew very little about serial killers. My crime was not true and my play was not auto.

My speakers were big and my TV was small. Hardly anything was portable and my hardware was never updated. My elevator and taxi rides were devoid of television screens. I read cereal boxes while eating breakfast and shampoo bottles while sitting on the toilet. I never talked to my watch, my phone did not correct me, and acquaintances never asked me to finance their independent film or back surgery. My refrigerator and toaster were incapable of communicating with each other. The war was cold.

Me – on the right – in the early aughts in Chicago at Rainbo Club. (Did I mention photo booths were a thing then too?) That's my buddy Simon on the left.

Also, I was the default. No one called me toxic or problematic. Things weren't fluid and there was no spectrum. I assumed the police were telling the truth. I was unaware of how frequently powerful men answered the door wearing nothing but a towel. There were a lot of questions I never had to ask.

Complaining was frowned upon; I was told to walk it off. Therapy was for people with real problems and things stayed unsurfaced. I didn't think about wage gaps, redlining, gerrymandering, or the intricacies of romantic encounters. There were a lot of questions I never had to answer.

Were those the good old days? It's tough to say; we didn't rate things back then. Stars weren't doled out and our feedback was not appreciated. Mostly, we sat in silence. We didn't have infinite scroll. We reached the end of the page and then it was done.


Enjoy this? Please consider a paid subscription – $5/month or $50/year – to receive exclusive content and show your support. (There's also a free plan.) Thanks.

Things You Are Wrong About #8: Salt Bae

Finally, a ponytailed chef with pecs and sunglasses who commits health code violations tableside! Yup, he pours salt off his hairy forearm and elbow right onto your meal. Hell yeah, true disruption.

I love this because my favorite thing is having people dip their forearm and elbow in my food right before serving it to me. The only bummer is he has to wear gloves now so you're deprived of his hand flavor. The Department of Health is such a buzzkill. Damn deep state!

See, the key to being a good chef is killing it on Instagram. You don't want a chef who spends all his time in the kitchen when he could be spending the entire meal service taking photos with people who don't understand food but want Instagram Likes.

Plus, he wears dark sunglasses while indoors. Because you want a chef who can't really see that well while he's cooking your meat...

Salt Bae: "This filet is burnt!"
Sous Chef: "Uh, no, chef. It's not. We haven't even started cooking it yet."
Salt Bae: "[Takes off sunglasses] Oh, right. Well, start cooking it then. Why yes, I do have time to pose for a photo. [Puts on sunglasses]"

But what about the health code violations? Pfft, they just show he's a bad boy. Get filthy, Bae. Rules are for suckers. Did I mention dinner for three at his restaurant costs $521? What a bargain! I mean just look at those reviews. (Actually, don't.)

Now just gonna tuck my napkin in and YES, I just got one of Bae's forearm hairs on my plate. I knew this was gonna be my lucky night. Once that pesky health inspector takes off, I hope Bae shaves his pubes right in the au jus. No wonder this guy's a viral sensation. (E. Coli is a virus, right?) Viva the Rico Suave of steak!

Cooking with Ruby: Instant Pot Indian Butter Chicken

Speaking of cooking, here's a video of me making Indian Butter Chicken in my Instant Pot while telling jokes (and hoping I don't blow the whole place up). Watch and see if I make something delicious or get arrested by the FBI. Warning: The mortar and pestle may make you horny.


Yeah, the vaccine is finally here! However, I'm a bit weirded out that it's the "Pfizer/BioNTech" vaccine because, well, BioNTech? Um, I'm no conspiracy kook but that totally sounds like it'd be the evil corporation in a Robocop reboot.

Also, it's interesting here in NYC because indoor dining is closed while gyms remain open. On the plus side, that means it's the perfect time to open up a fine dining gym! "Crunch by Thomas Keller. Tonight's special: Butternut Squats." Hey VCs, are you listening?

It's also gonna be an interesting time ahead for comedians:
1. Comedy clubs = closed
2. Nursing homes = first to get the vaccine
3. Comics gonna start doing nursing home tours in 2021
Related: I think we should refer to senior groupies as soupies.

Podcast: Marie Kondo and nonviolent communication

Hell & Wellness: Ep 5 // Nonviolent Communication, Bragg's Apple Cider Vinegar, & Marie Kondo

I Marie Kondo'd my whole apartment; she's a bit insane, but I've now got everything rolled up and standing vertically in my drawers and I gotta admit, it's nice. Also, my therapist told me to read this book about nonviolent communication. Apparently, when I say what's on my mind, I'm some sort of verbal arsonist, and following this book's suggestions can help.

You've got to sneak your NVC into conversations though. Otherwise, you wind up saying, "I want to practice nonviolent communication with you." And that can be rather off-putting since it implies you're actually thinking, "I really want to communicate violently with you right now, but I just read this book so..." Phew, talking is tricky! Listen to this episode of H&W to learn all about it.

Support the cause! Subscribe to the Rubesletter... 👇👇👇


1) Why Facts Don't Change Our Minds by James Clear. 'The way to change people's minds is to become friends with them, to integrate them into your tribe, to bring them into your circle. Now, they can change their beliefs without the risk of being abandoned socially.'

2) Jeff Wright does a hilarious job of personifying tech companies/social networks in his quickie vids. Here's Quibi getting shut down and Fleet or Story?

3) Representative Adam Kinzinger, Republican of Illinois, on manliness, sportsmanship, and politics: "I want to be clear: the Supreme Court is not the deep state. The case had no merit and was dispatched 9-0. There was no win here. Complaining and bellyaching is not a manly trait, it's actually sad. Real men accept a loss with grace."

4) We Asked People to Sum Up Their Worst Dates in Six Words (Vice): 'She wrote a zine about microaggressions.' -Allie, 27 Oh no. 'Didn't know what a meme was' -Eve, 23 Ugh. 'Broke and bitter stand-up comic.' -Alix, 33 Wait a minute.

5) Idiot compassion is a term that's new to me (because I'm a bit of an idiot). "Wise compassion, action that is inherently skillful, sees the whole situation and aims to bring release from suffering; its opposite is known as blind or idiot compassion, which does not take into account the whole situation and so, while appearing compassionate, is inherently unskillful and may actually increase suffering. For instance, idiot compassion occurs when we support or condone neurosis, such as giving a slice of cake to an obese friend. Yes, they may be begging you, but realistically you know that it will do them no good."

Bonus: What TikTok Taught One Stand-Up Comic. Great to see my pal Carmen Lynch going viral and explaining what she's learned along the way: 'Captions in black draw more eyeballs for her than red ones. And hashtagging doesn't always benefit her. Also, TikTok is quicker to censor than Instagram or the other platforms.' Related: I've really been enjoying not learning TikTok. Do I need to learn TikTok? Sigh.

That's it. Thanks so much for reading. And again, please consider subscribing or telling a friend. Love ya.


P.S. ICYMI: Last week's Rubesletter discussed conspirituality (when cuckoo meets woo woo).

The end credits

First timer? Subscribe here and check the archive.

Support via Venmo (@rubymatt) or PayPal. Paid subscriptions also available for $5/month or $50/year.

About Matt Ruby: I'm a standup comedian, the creator of Vooza, and co-host of the Hell & Wellness podcast.

Contact me via email at [email protected].

About the Rubesletter: Musings from a standup comedian and startup veteran. Topics include comedy, tech, politics, wellness, pop culture, and more. Sign up to get a weekly fix in your inbox.

Matt on Twitter

Matt on Instagram

My latest standup special is free on YouTube. And you can stream my standup albums "Feels Like Matt Ruby" and "Hot Flashes" too.

All Comments: [-] | anchor

jeffbee(10000) about 20 hours ago [-]

The thing that rings out for me in this essay: 'Once plans were made, I showed up without any further contact to check whether we were, in fact, "still on for tonight,"' I miss that. It used to be that if you made plans to meet someone, they would show up at that time and place. Amazing! And the difference is not even limited to your friends and dating. Now we can't even count on regularly scheduled activities to happen as planned. Public schools cancel classes with a voicemail and less than an hour's notice, under the assumption that everyone will get the message.

r00fus(10000) about 20 hours ago [-]

(Pre cellphones, early 90s era) My problem with that is that my friend group were not exactly punctual (myself included). So that meant a lot of waiting around, and in some cases, your group may not show up. Being near a pay-phone or begging a phone call from an establishment helped some.

Also - you remembered your friends phone numbers (or had a black book).

asdff(10000) about 18 hours ago [-]

Even in the 90s you'd be glued to the local weather report on TV and get maybe 30 minutes notice before the bell whether school was off or not. I remember them not even calling sometimes and having to see the anchor announce the closure.

ladyattis(10000) about 18 hours ago [-]

>I assumed the police were telling the truth.

I gotta laugh at this. Being someone that's from a working town and being from a working class family, cops weren't to be trusted. Cops lied even back then. Life wasn't better back then. The fact he seems to mock gender fluidity and polyamory really shows how much has changed for the better. People who didn't fit in with the gender constructs then were out of luck. Either you were forced to be lgb or if you were trans you had to fit the gendered mold; no androgyny unless you're doing it as part of a musical act.

chadlavi(10000) about 18 hours ago [-]

See also 'being elite was good' and 'I wasn't polyamorous.' It's a whole article of 'I only knew how to be a vanilla white guy. And that was a good thing!'

d0gsg0w00f(10000) about 18 hours ago [-]

Why is polyamory good?

gopalv(10000) about 19 hours ago [-]

> Were those the good old days?

I've heard this from several generations that the time when you were still under the protection of parents, but not their attention are the 'good old days' of your life - like if your biggest worry if you flipped a car was 'My dad will kill me!' and not 'phew, not a scratch on me & my friends are all alive'.

Must be the youth and opportunity of that phase of life rather than the actual era in the world (just look at '2007 was the best year in video games' for an equivalent for a late millennial).

The music was better, the cereal crunched better, all your friends lived nearby & were always free to hang out, the TV shows were made for your eyes and talking about your dreams was the thing you did without any irony.

Also there was a lot that affected you that you just didn't know yet. You weren't even aware of your ignorance & all knowledge was just within reach.

> I didn't think about wage gaps, redlining, gerrymandering, or the intricacies of romantic encounters.

> Things weren't fluid and there was no spectrum. I assumed the police were telling the truth. I was unaware of how frequently powerful men answered the door wearing nothing but a towel.

Oh, there was definitely a spectrum (Rain Man came out in the 80s). Rodney King was before the iPhone. LBJ was already showing people how everything in Texas was bigger (Doris Kearns Goodwin has a laugh about it, but we'll never know if she cringed).

I'm too young to remember all this, because it was before my time, but I sort of went into the part 2 of 'We didn't start the fire' here.

pram(10000) about 19 hours ago [-]

It's practically nonsensical. I had a 'dumbphone' up until 2011. I'm not even a luddite, I just did everything on my computer (and still pretty much do) because the general early phone OS experience was vastly inferior. 'Life before smartphones' was thus almost essentially the same going back to like 1996. I still spent all day on the internet, except now I can read it while I take a dump I guess.

planet-and-halo(10000) about 16 hours ago [-]

Yeah, doesn't it seem natural that being around a persistent social group and spending the majority of the day hanging out made us happy? Seems like pretty much what we evolved for.

chadlavi(10000) about 18 hours ago [-]

The part about cosmetic surgery seems gratuitous and unnecessary/off topic.

websites2023(10000) about 18 hours ago [-]

I think that was meant to call out the highly modified people that are shown on Instagram. I don't use Instagram, so that comment stuck out to me as well, but I see where the author is coming from.

tboyd47(10000) about 18 hours ago [-]

Things I enjoy about my life now, after having ditched my smartphone:

* Being fully into a stimulating conversation with friends for an hour or more, without constant interruptions for people to check their phones and then change the subject to something irrelevant and mildly infuriating.

* Being able to go for random/spontaneous drives with my family in the car and negotiate where we're going next without having to pull over to update Google Maps or hear the phone bark 'Make a U-Turn' a zillion times until I do.

* Looking forward to social interactions being events where I enjoy doing things I like with people whose company I value, instead of a constant stream of bullshit, dueling ego trips, and disconnected conversations where both parties spend more time correcting the other person about what they meant than actually saying something.

* Waking up in the morning earlier than planned, full of energy, then sipping my coffee slowly as the sun rises, because I got good, restful sleep last night.

* Not having to face surprise criticism about something I said X days ago, because most of my statements are verbal, only reach the people they were intended to reach, and are not preserved after that.

* Not having to spend more time taking pictures of an activity than actually doing the activity.

* Not being in a constant state of annoyance all the time due to a distinct lack of time spent correcting typos, fat-fingering menus, waiting on things to load, and dismissing popups.

* Tasting true solitude (not just alone time) once in a while.

I could go on and on.

jasonlotito(10000) about 17 hours ago [-]

That sort of ignores the rest of the article outside the headline though.

paxys(10000) about 16 hours ago [-]

The second paragraph is funny to me:

> I memorized phone numbers, jotted things down in notebooks, had conversations with taxi drivers, talked to random people at bars, wrote checks, went to the bank, and daydreamed.

I did all of these (except the phone number one) in the last week.

More than technological advancements or anything else, all of this nostalgia is really just about getting old.

'The human civilization peaked when I turned 12 and started declining when I crossed 25. I pity today's youth.' – every generation ever.

p_j_w(10000) about 15 hours ago [-]

I don't get the sense that the author is trying to say that the past was better. Although I could be misreading you and you're not intending to say that he is. I suppose this is another consequence of getting old.

freshdonut(10000) about 16 hours ago [-]

Have you been to a college campus in the last two or three years?

There is a serious smartphone addiction problem. It is seriously worrying to see so many of my peers craning their necks, staring at their phones for hours on end. On the bus, in class, while hanging out: it is an observable fact that everyone is almost always on their phone.

I personally believe we are in a watershed moment for human civilization. The harms of this smartphone addicted world will snowball down into later generations who have never known a life without every need catered for and every boring moment seized by entertainment.

karaterobot(10000) about 14 hours ago [-]

> More than technological advancements or anything else, all of this nostalgia is really just about getting old.

The other side of this is that only people who are older can actually notice when things have changed. So, of course it's older people who talk about it the most.

throwaway984393(10000) about 9 hours ago [-]

I don't pity today's youth. I do pity myself for not having young knees anymore...

underlines(10000) about 5 hours ago [-]

Prime example of how our brains trick us with the rosy retrospection cognitive bias.

'Everything was better in the past.'

It's also closely related to declinism.

A rather dangerous development is the right wing's favourite narrative of 'everything is getting worse in our world', which is in fact the opposite of what we measure with most indicators like corruption perception index, GDP per capita, happiness index, etc.

I think cherry-picking negative aspects of today's world and comparing them to positive aspects of the past draws a very subjective picture of how the past was.





ImaCake(10000) about 5 hours ago [-]

There is of course a classic xkcd about this [0]. I think these help a little with perspective. But my favourite is the rise in syphilis cases leading early psychologists/psychiatrists to think we were in civilizational decline because so many people would go insane from the late stages. You can read a nuanced and thoughtful take on this idea on page 6 of 'The Mind Fixers', which is a pretty academic book about the history of psychiatry.

0. https://xkcd.com/1227/

sylens(10000) about 15 hours ago [-]

I think there is definitely something I miss from the pre-smartphone era, and that is that the Internet was something akin to an appointment activity. You 'signed on' in the morning and maybe again after school or work. Logging into AIM was like broadcasting to your social circle that you were home and free to chat. You welcomed the instant messages, the interruptions, the socialization - because you knew you could sign off and be unreachable again.

I think that era of having widespread, but not ubiquitous, access to the internet is a time period I would like to have back. For every useful maps or food delivery application on my phone, there are three more that steal my attention with unwanted notifications.

Twixes(10000) about 15 hours ago [-]

Can't you uninstall or limit the apps that try to steal your attention?

majjam(10000) about 5 hours ago [-]

I'm planning on getting an old-school non-smart phone to take with me sometimes, precisely so I can disconnect without being uncontactable.

dexterhaslem(10000) about 14 hours ago [-]

Disabling notifications on apps goes a long way. And the apps all clearly hate it and constantly bug you to turn them back on.

krylon(10000) about 19 hours ago [-]

Now I feel old. I remember vividly running around with my first camera, looking for objects worthy of being photographed. The film cost money, so did developing it into pictures. I really had to weigh the pros and cons of taking a particular picture. And in a class of ~25 kids, I was one of three who owned a camera. Not that it was such a luxury item, but most people weren't into that.

These days, (nearly) everyone carries a camera around all the time, and one that is quite probably much better than the one I had in 1992. They can take dozens, even hundreds of pictures without breaking a sweat, and it does not cost anything.

Nostalgia is a very warped mirror. Back then, I did not miss the ability to take dozens of pictures at no cost, because the option did not exist. Was it better? Worse? Neither, I think. But this is the first time I feel old and appreciate it for the history I have lived through. Getting old is weird, but it sure is interesting. (For reference, I'm 40. 'That's not old', I hear someone say, but I have never been this old before, so for me it's all new.)

Gauge_Irrahphe(10000) about 5 hours ago [-]

Smartphone cameras are nowhere near analog photos, there is just no way to make a good camera so small.

foobiekr(10000) about 9 hours ago [-]

As my first camera was a Kodak Disc, I have the nice experience that literally every digital camera has been better than my first film camera.

JohnTHaller(10000) about 13 hours ago [-]

One thing I've noticed is that while everyone can take photos and video anytime they want, many folks simply forget to. How many people have you seen whose timelines are only selfies, food, and/or cats and dogs? I've been making a point to take just a few photos or videos while spending time with friends after a show, and many of my friends are thankful when I do and share them (directly, not by posting and tagging them online).

agumonkey(10000) about 2 hours ago [-]

There was a short doc about a young analog photographer who said that the freeness made him hoard shots. And that the film development plus the necessity of choosing shots made the whole activity a lot deeper for him.

I deeply believe that we need structures and limits otherwise it's too easy to become metaphorically obese.

technological(10000) about 14 hours ago [-]

I wish my parents had had a smartphone camera when I was small. I capture so many pics and videos of my kid, and I feel excited at the thought that he could one day view his entire childhood. I remember only a few things from my own childhood, and it would have been really fascinating to view it like that.

vmception(10000) about 6 hours ago [-]

You can still do that if you want. Some people like the limitations and finality.

xtracto(10000) about 15 hours ago [-]

Back in 1992 when I was 10 years old we went to Disney World with my family (as a middle class Mexican family, that was one of 2 out of the country trips in our childhood).

My brother (2 years older) and I had a mechanical camera with film rolls of, I think, 12 or 20 photos (this format: https://en.wikipedia.org/wiki/110_film). We have at most 30 pictures of that trip in our family albums. I wish we had taken more, as my memory of the trip has faded quite a bit.

progman32(10000) about 18 hours ago [-]

I remember back when I got my first digital camera, people would routinely ask me 'ok but how do you look at the pictures? Do you just print them out?'. Looking at them on a screen was almost unfathomable.

Nowadays, a physical album seems to have taken the place of your camera in the 90s. Not quite a luxury item, but you'd have to be 'into that' to go to the trouble of making a physical album.

lordnacho(10000) about 18 hours ago [-]

Regarding photos:

People our age only have a few childhood pictures, and they are warped by time on analog media. Those pictures of us as a kid look really old because they are naturally filtered. Soon people will wonder WTF old-pic filters are for, and some historian will have to explain why it's blurred and the colors are faded. Also why did people have clothes for each decade?

Our kids, by contrast, have had pictures taken of them every week at least. With metadata so you know where you were. And they're digital images that won't fade. When our kids are 40, they can look at an archive of how they looked pretty much every week of their lives. Not only that, they can already search the archive for particular situations.

throwawayboise(10000) about 11 hours ago [-]

It's weird, that even though I have a camera with me all the time now, I take way fewer pictures than I did in 1982.

gdubya(10000) about 5 hours ago [-]

Exactly! We're never old, we're new, constantly. :)

elwell(10000) about 15 hours ago [-]

> but I have never been this old before, so for me it's all new

How poetic

theyellowkid(10000) about 15 hours ago [-]

Old is owning a PlayTape Music Machine.

jhgb(10000) about 16 hours ago [-]

> These days, (nearly) everyone carries a camera around all the time, and one that is quite probably much better than the one I had in 1992. They can take dozens, even hundreds of pictures without breaking a sweat, and it does not cost anything.

...and despite that, pictures of UFOs are as awful as ever. ;)

rapnie(10000) about 17 hours ago [-]

And he doesn't even mention that you could just be outside, and be unreachable and not able to reach other people too.

As a kid I used to play outside a lot, and my mother had no clue where I was, nor could she easily find out. I could be outside all day without her worrying that I'd be abducted or involved in an accident.

Now that has all completely changed, and my mother has too. Some years ago, when I walked into the hallway of my house, I coincidentally noticed a lot of people in front of my door. So I opened it, and it was the police, about to bust the door with a battering ram. As it happened, I hadn't answered my phone in a couple of hours. After multiple unanswered calls, my mom had called 911 on me. And my doorbell was broken; the police didn't even knock... they wanted the action, probably.

I was just freaking programming with the deep-work-destroying phone thingy on silence (where it should be most of the time, imho).

galfarragem(10000) about 19 hours ago [-]

Maybe it's just me, but I read this as a satire of modern days rather than of the old days. For most things the author hints were awkward in the past, the 'optimized' version sounds frivolous.

I'm not a zealot for the old times, but if we are honest with ourselves we realize that most new stuff is crap. 90 percent, somebody said. Few changes are net improvements.

JKCalhoun(10000) about 18 hours ago [-]

I'm pretty sure, on the whole, it was a satire of modern days.

But I appreciate the balance of the author, reminding us that there were shortcomings: always trusting police comes to mind.

kernoble(10000) about 20 hours ago [-]

The unmentioned thing here is: why? Why does the world of smartphones and today's hyper-connectivity seem so different compared to what came before?

Did people feel the same way when the railroad and other forms of rapid transport showed up?

What makes things feel so different? Is it more competition, and for what? Is it that things are just faster, and the certainties have changed? Has it fundamentally changed how we experience relationships with people?

Are our standards now higher, and is that a good thing?

mojuba(10000) about 19 hours ago [-]

Something I've been wondering about lately:

In the pre-Internet era, rumors, incidents, conspiracies, book and movie opinions were passed on verbally. I saw an article nobody else in my circle ever read; a friend watched a movie nobody else is going to see any time soon, etc. There were endless opportunities to get together and talk.

We were each other's Internets.

Do people talk less these days? I certainly do but that might be due to my age. But I'm genuinely curious if the topics of conversations are as intellectually fulfilling as they used to be.

travbrack(10000) about 19 hours ago [-]

Because we went from living in a world where information about the world around us was hidden, to suddenly having access to all of it. It's surprising, jarring and overwhelming and it's probably going to take multiple generations for people to figure out how to use it effectively, ignore the noise, and deal with the social issues created by it.

Damogran6(10000) about 19 hours ago [-]

I still occasionally marvel that I can wake up at 4:30am in Denver and be in Very Rural Virginia by 5pm...and that's with stopping over in Atlanta first.

It's not the travel, or the time, it's the 'it's more efficient to go thousands of miles out of the way due to logistics.'

This has not been particularly new, but I can still marvel at it.

Having fixed plumbing, I'm reminded that the current iteration is the result of 2,000 years of refinement.

I have a lathe, manufactured in 1966, that still holds tolerances, and I refer to a book ('How to Run a Lathe') whose first printing was around the turn of the 20th century (1912 or thereabouts).

Old stuff is remarkable, too.

mrweasel(10000) about 18 hours ago [-]

> Did people feel the same way when the railroad and other forms of rapid transport showed up?

To some extent, yes. When I was a kid, we'd rarely go to the only major city in our part of the country. Maybe two times a year. Now I live in that area, but I can easily go visit my parents for dinner, just because someone decided that a motorway was a great idea. It cut somewhere between 40 minutes and an hour off the drive.

It still boggles my mind that 30 years ago we considered it a day trip, but with a shorter distance, faster speeds, in a better car, it's just a quick drive, allowing my daughter to see her grandparents way more often than I saw mine.

Ajay-p(10000) about 20 hours ago [-]

I have never truly known a period of time without a smart device. The last watch I had was when I was a small child, and it was only a few years ago that I found a street 'atlas'. I have a feeling that I've missed a building block of the digital age by not experiencing an evolutionary phase.

tboyd47(10000) about 20 hours ago [-]

You can always go back, any time! Get a dumb phone.

It's great. Engage with others more authentically, more meaningfully, and more consistently.

quartesixte(10000) about 20 hours ago [-]

The pre-LTE era was quite the experience as a teenager, and it wasn't until the iPhone 5 that smartphones became ubiquitous and affordable (or justifiable) for many of my friends. I didn't have a smartphone until my junior year of high school!

SMS feature phones like the Sidekick, with physical keyboards, ruled the day, and many of my classmates actually disliked smartphones because of the lack of physical keys!

4gotunameagain(10000) about 19 hours ago [-]

Aligning the infrared ports of two phones until max speed was achieved, then being extremely careful not to move them for minutes that felt like ages, just to send over a polyphonic ringtone...

And if the bell rang, well, you were out of luck.

pugets(10000) about 17 hours ago [-]

What I miss more than anything else is having an attention span. Years of abusing social media has left my brain pinballing all over the place. I am a collection of unfinished thoughts. Even as I write this, I can feel my mind needing to latch onto something new.

bigpeopleareold(10000) about 3 hours ago [-]

Just makes me think: when I get to this point (rarely with social media, though), I feel guilty. I possibly spend too much time on the computer, but during spare time I give myself these options if I'm going to keep using it: learn something new related to my interests, write toy programs, or put it away and spend time with my wife.

nanidin(10000) about 14 hours ago [-]

I've been suffering the same thing over the last few months. A helpful technique for me has been to swap out my smart watch for a dumb watch, and to put my phone in a drawer unless I intend to use it.

I also heavily limited the types of things Facebook will send push notifications for. It used to be that if I got a notification, it was because one of my friends actually interacted with me in some way. Now I get a bunch of junk notifications that I feel are designed to pull me into the app and not really inform me of anything, to get me back to scrolling a feed. Like I'll get a notification that someone I don't know made a post in a group I've been in for years without ever getting a similar notification in the previous years. So I basically turned off everything that doesn't involve my actual friends doing something relevant to me.

collinvandyck76(10000) about 16 hours ago [-]

I recently quit all social media. There was a bit of withdrawal but I can confirm that my attention span has started coming back.

Otek(10000) about 16 hours ago [-]

Yeah, mindfulness is promoted as a cure for that, but I'm not sure. Right now I'm pretty mindful in random daily situations, but it just gives me more depression and overthinking. When I turn that off for some days I'm... happier? But still miserable. I don't know what to do.

LeftHandPath(10000) about 11 hours ago [-]

I've started weaning myself off of everything that's instant gratification. No Reddit, no Imgur, no short-format news stories or list articles. A week ago I drove 9 hours for a camping trip and spent several days without my phone and smartwatch. For several months I've made a point to walk at least an hour a day (about 5 miles) without looking at my phone, though I still wear my watch to track the distance. I still feel like I have to have some form of audio going in the background, maybe something educational, maybe ASMR, while I'm browsing Hacker News. If I play a game, I still choose one without a narrative so that I can listen to a podcast while I play. I'm not sure that any of these habits are beneficial.

I think Nicholas Carr had a great point in The Shallows (2010) [1] -- our brains have a lot of plasticity, even into late adulthood. The way we use the internet probably has a much larger impact on the way we think than we are currently willing to acknowledge. There is a healthy way to integrate electronics into our daily lives, but I don't think many of us have found it.

[1] https://en.wikipedia.org/wiki/The_Shallows_(book)

random_kris(10000) about 16 hours ago [-]

What to do to fix this?

luxurytent(10000) about 13 hours ago [-]

As a parent I've flip-flopped between leaving my phone in the bedroom/office during the day and flipping through TikTok while my kids crawl over me, because my brain is so fried that all it wants is some dopamine hits to help get through the day.

Thankfully, we're crawling out of the first year with our second child, and sleep, routine, etc. are all getting easier (not being in COVID lockdown helps too). I'm finding it more common to leave my phone in the bedroom while I enjoy my day with the kids.

I've also realized that the sole reason I bring my phone to the kid's park is in case I need to contact my wife, or vice versa. I've been tempted to get a smart watch w/ cellular just so I have less bulk to carry around, but a 'dumb' phone may be just as sufficient ...

bovermyer(10000) about 16 hours ago [-]

There's something to be said for leaving your phone at home, driving to a park, and just walking around for a few hours.

PragmaticPulp(10000) about 15 hours ago [-]

As a parent, I've been watching this play out in real time among other peoples' children.

Most parents I know are deliberate about limiting screen time and ensuring their children don't substitute screen time for other activities. It's actually not that difficult to do so as kids are really good at finding entertainment in their environment even without electronics.

However, some parents give their kids all the tablet, TV, and phone time they want. As they grow up I can see them failing to learn how to play with others their own age because they'd rather reach for a screen than make an effort to do something. They can be frighteningly grumpy when separated from their electronic devices and can even throw tantrums until their parents cave in and give them more screen time.

FWIW, I've also watched parents reverse this trend by slowly weaning their kids off of screen time and substituting other entertaining activities. It doesn't take a whole lot to nudge people in the right direction, but putting that phone down and doing literally anything other than staring at a screen can be a difficult first step to take.

robohoe(10000) about 16 hours ago [-]

This one is a tough one, and I can relate. I haven't been able to finish a book or work on any labs or FOSS projects in years now. I reach for the phone whenever I get a moment of downtime. The addiction is strong, and as I learn to be more mindful I realize how commonplace it is for everyone.

baliex(10000) about 4 hours ago [-]

> What I miss more than anythi

And then I collapsed your comment. It's worse than I thought.

zz865(10000) about 17 hours ago [-]

Me too, congrats on completing that second sentence. :)

theyellowkid(10000) about 15 hours ago [-]

> What I miss more than anything else is having an attention span.

I need to look for more examples of the art of the future: the one-paragraph short story (4chan greentexts, I guess), the 20-second hit single.

maerF0x0(10000) about 17 hours ago [-]

> had conversations with taxi drivers, talked to random people at bars,

Returning to these kinds of ad hoc social interactions has been instrumental in helping break my isolation and depression: friendly chit-chat with a barista, saying hello to anyone who isn't obviously avoidant, asking to pet someone's dog, etc.

Not wearing headphones has also been important, because it means I'm instantly available for interaction if someone says something, or if I want to.

JoeAltmaier(10000) about 17 hours ago [-]

Curiously, almost everybody on the hiking trail where I bike has the telltale white lozenges in their ears now. Pre-COVID it was something like 25%.

So trail talk has been reduced to nearly zero. I call out 'On your left!' as I pass, but folks can still be startled as I drift past. And forget saying 'Good morning!' and getting a response.

It will take years to undo the changes done by this past year.

jokoon(10000) about 16 hours ago [-]

I bought a smartphone about two years ago and only started paying for wireless data a year ago, because I was tired of being stubborn about it; I really felt I was excluded from many things.

I have to admit I often go on reddit when I have some time, but I don't go on instagram or facebook.

Sharing videos on WhatsApp when you're in a rural area is crazy, and to me it's really pointless, even though I like technology.

It's hard to say if people are less social because of smartphones, since social networks are not so social after all.

Although I'm really curious whether avoiding digital social networks would result in an amputated real social life. Online dating really allowed me to get out more, and I don't feel there is a good enough equivalent for friendships and activities.

psychomugs(10000) about 16 hours ago [-]

I've owned a smartphone since late high school (~2010) but didn't get a data plan until some five or six years later. I actually miss having to be more intentional about where and when and what I was doing; data feels like an invisible umbilical cable that I can't cut off.

legrande(10000) about 19 hours ago [-]

You can still live like what's described in this article. Get yourself a dumbphone and a paper atlas, only pay with cash, avoid loyalty cards, read paper books, newspapers, etc

Now and then I do that, just to switch off from our hyper-connected world. Switching off is the new peace of mind.

xwdv(10000) about 19 hours ago [-]

You can't really. People think it's simply a matter of getting rid of all the new tech.

It isn't. You'll just be an anachronism. If the entire world isn't living the same way then you are only getting a superficial experience. Instead of living a genuine life, you are merely pretending for a while.

theyellowkid(10000) about 17 hours ago [-]

That's pretty much me, except for reading a couple of websites (like this) , online banking, and pirating e-books for my Kindle.

I'll fire up a machine for video or photo editing once in a while or sheet music work, but otherwise they're not much use.

One problem is being too old to care about video games. When Space Invaders came out, I couldn't imagine that people would choose it over pinball or foosball. My loss, I guess.

sneak(10000) about 16 hours ago [-]

You can't complete required procedures for international travel right now without a portable web browser and mobile internet.

Ditto for most restaurant menus in a lot of places: they are QR codes now, that point to URLs.

Airlines will only let you book with cards, no cash.

adam12(10000) about 19 hours ago [-]

Your friend won't wait an hour for you, though.

Gunax(10000) about 18 hours ago [-]

The thing about technology is that even if you don't change everyone else will.

Sure, it might still be technically legal to ride a horse down the street. But soon enough, people put in multi-lane highways. Stores took out their hitching posts. Then we started designing cities around the car, so what used to be a mile away is now ten. And half of that distance is consumed by parking lots.

dwaltrip(10000) about 19 hours ago [-]

What are the mechanics of switching between these modes? Do you just swap SIM cards from the smartphone to the dumb phone?

gilbetron(10000) about 18 hours ago [-]

You can sell your car, get a horse, and travel around like it is the 19th century. Of course, nothing around you will be like the 19th century. Not that it isn't worth doing for other reasons, but moving your tech backwards doesn't move the world around you backwards.

tharne(10000) about 18 hours ago [-]

> You can still live like what's described in this article. Get yourself a dumbphone and a paper atlas, only pay with cash, avoid loyalty cards, read paper books, newspapers, etc

I've been doing this a fair amount lately, particularly when I go on vacation. It's glorious.

There are some inconveniences, for sure, but the good outweighs the bad. Most of what drives my tech use these days is 1) My job, and 2) The social expectations of others. On balance smartphones and ubiquitous internet have benefits, but the bad far outweighs the good. Unfortunately, once a technology is embraced by enough people, you're more or less forced to use it if you want to live in mainstream society.

beamatronic(10000) about 18 hours ago [-]

Instead of a paper atlas, get a Garmin device with a built-in map

don-code(10000) about 19 hours ago [-]

Much like living outside the Matrix, would you really want to go back, knowing what you know?

Yes, you can use a paper atlas. I have an 8-year-old car with GPS, and a 34-year-old car without. I bought a map book for the 34-year-old car, thinking it'd be a 'period accurate' way of driving it; it's anachronistic at best, and frustrating at worst - can you read the street signs, and did you drop your compass under the seat? In the 8-year-old car, I can hit a button, say 'Navigate to (an address)' _while driving_, and it figures it out.

Some of these I do on principle (only paying with cash, avoiding loyalty cards, etc.), and I accept a compromised UX as a result. Your mileage may vary, depending on how much you get out of these things, but their immediate impact seems generally negative.

m0ngr31(10000) about 20 hours ago [-]

I've been working my way through Seinfeld, and I realized most of the plot lines couldn't have happened if cell phones had been commonplace back then.

KineticLensman(10000) about 19 hours ago [-]

Also 'Romeo and Juliet' (tragedy occurs because a message isn't delivered), 'Assault on Precinct 13' (gang cuts the phoneline to a besieged police station) and the opening credits of Terry and June (couple can't find each other in a shopping centre)

jeffbee(10000) about 19 hours ago [-]

You know, that's another thing that has changed a lot. Modern TV writers just can't stop themselves from using the mobile phone as a device to advance the story. We have to watch some guy in a TV show sending iMessages. That is so boring, and as a caveman from the pre-cellular era it takes me out of the show and makes me want to turn it off. A recent offender in this regard was the Amazon show 'Bosch'. If you made a supercut of the titular detective answering his iPhone, it would be almost as long as the series itself. This is particularly irritating since the Bosch novels were written in the 90s, before the smartphone era, in the car-phone era at the latest.

munificent(10000) about 19 hours ago [-]

I love how every horror movie in the past ten years has an obligatory scene to establish some flimsy reason why their cell phone doesn't work.

mateo411(10000) about 19 hours ago [-]

There's a modern Seinfeld twitter account, which has a bunch of zany plot lines that are only possible in the smart phone era.

codersteve(10000) about 18 hours ago [-]

I think a lot of movies now take place in the past to remove the mobile-phone problem.

KineticLensman(10000) about 19 hours ago [-]

A couple of things not mentioned:

* Half of the population watching the same episodes of the same programme simultaneously. If you missed an episode, it was gone forever

* Inane arguments in pubs about facts that couldn't be instantly googled.

ASalazarMX(10000) about 19 hours ago [-]

> * Inane arguments in pubs about facts that couldn't be instantly googled.

I don't miss those. Before the Internet, if it wasn't in an encyclopedia and you weren't at the library, you had no way of corroborating information.

On the other hand, since facts are so accessible right now, those arguments have shifted to voicing their feelings and wishes because those can't be falsified.

mjhagen(10000) about 5 hours ago [-]

Who was that one actor in that movie?

I don't know.

Ok. Another round?

coding123(10000) about 19 hours ago [-]

Often if you missed an episode you had one more chance, usually either at 2 am or during the daytime the next day.

Broken_Hippo(10000) about 2 hours ago [-]

I don't miss these arguments: I much prefer a short stint of BS followed by searching for the facts, which is often followed by a different conversation. Before, it was just folks trying to convince others that they were right, with no conclusion.

kenjackson(10000) about 19 hours ago [-]

Another thing was regional music. I used to hear about go-go music in DC, but I couldn't get access to it on the West Coast. People who had access to lots of different music were the music elites. I worked in college radio and set up the college's first RealAudio server, and that was a game changer (we were admittedly late to the game, but the whole notion was a game changer).

And now music really knows no bounds.

ectopod(10000) about 17 hours ago [-]

My local had the Dunlop Book of Facts as an inane-argument ender. Although some people considered it cheating.

jon-wood(10000) about 16 hours ago [-]

I have a rule when in pubs - no Googling answers to the inane argument. Inane arguments in pubs aren't about getting the right answer, they're about the tangents you end up going down from them, something that's lost if you can just immediately get the answer.

imhoguy(10000) about 17 hours ago [-]

> If you missed an episode, it was gone forever

I set the VHS recorder, just in case.

sosuke(10000) about 19 hours ago [-]

Just a random thought you inspired: if any 'collective consciousness' ideas are grounded in reality at all, does missing the shared episode experience negatively change it?

I remember that the Game of Thrones red wedding event still seemed to happen at the same time even though it was re-watchable on demand.

irrational(10000) about 7 hours ago [-]

My teenager doesn't have use of his phone right now. He was complaining that he had no way to contact his friends. I pointed to the landline and he said that wouldn't work since all their numbers were on his phone. Plus they would never actually answer a phone call - who answers phone calls? I suggested email. None of them check email. I suggested riding his bike to their house and talking to them in person. He was aghast at the thought. If smartphones went away, I think there is an entire generation or two that really wouldn't know how to go on living.

Broken_Hippo(10000) about 2 hours ago [-]

You are asking your child to behave like you did while growing up, without taking into account that your son would be doing things considered rude now.

I'm a fully grown adult over the age of 40. I don't answer my door if I'm not expecting someone; they can call from my driveway if they want. I don't answer phone calls in general, and I hate talking on the phone. Few have issues with this, as they understand that house visits and phone calls are me asking you to stop what you are doing and talk to me about something minor that could wait if you're busy. Email is just a letter, cold and impersonal, and isn't a way to communicate with friends, though I'll use it from time to time as a last resort.

I'm sure they'll adapt if smartphones actually went away, but that's not what is going on with your son.

paulvs(10000) about 19 hours ago [-]

This is eerily similar to videos forwarded on WhatsApp about 'things you'd only understand if you were born in the <insert decade here>' :)

I did relate to it though. It would be apt for a video about things you'd only understand if you're a millennial.

Damogran6(10000) about 19 hours ago [-]

I could see it being a pretty nifty series on the History Channel. 'If you were 120 years old, this is what you'd remember from when you were 20'

WillEngler(10000) about 20 hours ago [-]

Contrary to the author's claim, the photo booth at Rainbo Club is still there.

pbrb(10000) about 13 hours ago [-]

Shocked me to see Rainbo referenced. The photo booth is still there.

ksec(10000) about 17 hours ago [-]

Not really to do with smartphones, but before the smartphone era, getting a phone call was pretty damn nice. Now it's all robocalls. I don't even remember the last time a real person called me. The people I know, and even those I don't (such as job agents), will leave a WhatsApp message.

I also used to think having a decent digital camera on a phone would be insanely great. It turns out not so, at least not anymore. Every single goddamn digital photo is now enhanced with some stupid 'computational' photography, AI, or whatever machine learning. Apple used to be in the realistic camp, but now even they are joining the Instagram generation. (I have been told customers want these sorts of features, as they think the photos look better and sell better.) And if that weren't enough, most photos are posted with some editing or filters, to the point that nothing in the photos I see is real.

I remember dreaming about an online MMO on a phone. That was the UO / World of Warcraft era: maybe in ten to fifteen years' time, a pocket computer with a wireless network. That would be so much fun. But gaming now has become a casino, with lots of gambling options to win the game. It is also a time sink for many of us, an escape into the 'metaverse'. (The metaverse is the new VC hype of 2021.) Games are no longer the same.

I remember I really, really wanted IRC, ICQ, and later MSN on a smartphone. It didn't work. I had to hack an O2 Atom (made by HTC back when they were an ODM like Foxconn, before they became a brand of their own), and it was a battery drain. Now all of that is largely replaced by instant messengers. But we don't give out ICQ numbers or MSN handles to anyone anymore. It is all 'phone' numbers. So it is more of a 'real world' connection rather than the 'Internet identity' we used to have.

Speaking of the O2 Atom, I had been looking for a smartphone, or a pocket computer, that used 2G GPRS as its data connection, so I could slowly browse the Internet on the street, or sit in a cafe and use some remote server that would send me the .MHT version of a website so I only needed one connection. (Multiple connections on GPRS are bound to fail.) The iPhone was everything I wanted, and I remember vividly how Apple nailed it, from the capacitive touch screen to all the small UX details. Most people didn't get it, as it wasn't the first 'smartphone'; you had Nokia Symbian and something like the Sony Ericsson P900 at the time. Most people were like Steve Ballmer, laughing at it. But for those of us who had been looking for it for so long, the iPhone was 'it'.

I would have thought that with smartphones people would listen to even more music, as they replaced the mp3 player and the MiniDisc, where we curated our collections very carefully to fit the tiny amount of storage we had. It turns out people are consuming more video and other forms of media. Music isn't 'dead', but it certainly didn't bloom like many expected. From a high-level perspective, all forms of media are competing for your attention and time.

quenix(10000) about 17 hours ago [-]

Computational photography (what you call 'some are enhanced with AI or whatever Machine Learning') is a perfectly legitimate evolution in the field of consumer photography. I'm not sure why you are so critical of it.

narrator(10000) about 17 hours ago [-]

I didn't see it mentioned, so I'll add it for completeness. One of the differences between the pre-smartphone and the post-smartphone world is the pervasive and unlimited availability of adult content. Pornography used to be really hard to get. Now it is unlimited and free. There are now more than 1 million content creators on OnlyFans. The democratization, if you want to call it that, of adult content seems to me like something out of gonzo sci-fi.

Music is also now almost free and unlimited. Local music died in the early 90s when the telecommunications act was passed and all the local stations got bought and consolidated into national syndicates. Local DJs didn't find bands anymore. There was a real rough spot when the mainstream was dull and it was very hard to find out about alternative music. Later, the internet and Youtube leveled the playing field between the mainstream and everything else and things got better.

fleddr(10000) about 16 hours ago [-]

I'd say pre-internet, not pre-smartphone. Wide availability of free porn has been there since the start of the internet. It precedes smartphones by more than a decade.

Or so I've been told.

krupan(10000) about 17 hours ago [-]

I grew up in Washington and I remember biting into a red delicious apple was like biting into a water balloon they were so juicy. They were always super crisp and crunchy too, big chunks would break off as you bit them. Now when I get them (I don't live in WA anymore) they are always mushy and gross. Is this what we are talking about?

abxytg(10000) about 17 hours ago [-]

IIRC some huge percentage of apples are produced in WA. The ones I get in SW WA from Yakima are still like that.

laurieg(10000) about 20 hours ago [-]

A couple of stand-out memories from the olden days (and I don't consider myself particularly old):

Getting a call in a restaurant. Only happened to me once but I certainly felt like a VIP.

Carrying a tiny map book of London around with me while cycling around. Missing turn after turn until finding there was a canal which basically took me from the center to my uncle's house.

Arranging to meet a friend and then being late. Really late. 1 hour late. He was still there, waiting for me.

varjag(10000) about 17 hours ago [-]

I (legally) smoked on a plane!

xwolfi(10000) about 20 hours ago [-]

Yeah I remember waiting for people - I got a smartphone at 16, in 2004, something like that, so it's hard to really imagine how it was for adults...

My parents told me they spent evenings at the phone booth talking to each other - but even that is ultra convenient compared to my grandparents sending letters :D

But I think it's better anyway - we sample mating candidates more, we cycle through faster, we can stop and try anew nearly any time until 50, and with some difficulty above.

I mean my aunt had a crushing divorce when she had 3 young children and stayed alone working with all 3 until the internet arrived and she could find a partner much faster...

downut(10000) about 14 hours ago [-]

In 1989 I wrote a letter (i.e., mailed) to a friend from grad school (ASU) pursuing her studies at a Northern UK university. I sez we will meet you at the center of Piccadilly Circus at such and such a time, on such and such date. She wrote back, 'of course.' This took a month or so. We flew over, and showed up. So did she. I still remember meeting up: it was no big deal.

We had also written to Czech friends from grad school (U FL) that we would show up in Olomouc on such and such date (Jun 1989, interesting times). They were visiting relatives and we showed up. And were whisked off to 5 days of whirlwind touring the soon to be de-Sovietized Czechoslovakia.

We hosted quite a few Eastern Europeans in the '90s, all arranged over snail mail. There was a sense of responsibility that we don't really experience today when dropping in on travels. All the modernity in the world, and nowadays we occasionally get ghosted, even after making repeated prior arrangements using the latest, hottest smartphone technology.

I will say this: google translate + maps are the two great inventions we appreciate most. The rest is a solid meh. We have a theory that maximized immediate convenience has an unanticipated effect of atomizing and devaluing some relationships.

Per the parent, I too remember those paper maps while cycling. As in, riding from the Portland Airport to Arcadia and down to LA, using a tour guide, quite tattered at the end. Most of the times before an extended trip (100+ miles) I would memorize the route the night before. This worked fine for 25 years.

paxys(10000) about 16 hours ago [-]

Smartphones or not, being very late for planned meetups definitely hasn't changed as a concept.

lordnacho(10000) about 18 hours ago [-]

IMO the big thing is not being bored. Someone is late for a meeting? Doesn't matter, you've got HN to read. On the back seat for a long drive? Doesn't matter, you can answer emails. Or play a game.

The whole psychologically weird phase of 'hmm I'm here and waiting, and all I can do is watch paint dry' seems to have vanished.

I'm not sure what people prefer more though. Say you're waiting for a date, do you feel best breaking off directly from your reading of the dragon book, or would you feel best just doing nothing until they showed up?

6gvONxR4sf7o(10000) about 16 hours ago [-]

I've been intentionally cutting things like TV or internet out of my life at certain times, and can definitively say I'd rather be bored. All these things I tell myself I want to do are actually not that hard when I'm bored. Writing, drawing, having more conversations with loved ones. It's a lot easier when I can't say 'let's watch the new episode while we eat' or 'I'll surf HN for a bit.' The boredom builds until it finds release, eventually being high enough to do the things I actually want to spend my time on.

If all I can do is watch paint dry, I'll find something else, whether it's rewarding or just mindless dopamine.

That said, I'm totally addicted and cutting out the internet is extremely hard when I sit in front of it for work and my computer and phone are where a lot of the rewarding things are too (e.g. cell phone drains time, but you need it to text friends). I feel like an alcoholic working as a bartender also required to take just a teeny sip of whiskey every time I talk to someone.

mrtksn(10000) about 18 hours ago [-]

I miss being bored. I used to go out of character and explore things when I got bored; now I don't remember the last time I was bored for a prolonged time.

I mean I get bored of a game or an article etc. but I would immediately seek refuge in something else that is easy to reach.

Before constant connectivity, I would attempt to cure my boredom in much more hardcore ways.

klyrs(10000) about 18 hours ago [-]

> The whole psychologically weird phase of 'hmm I'm here and waiting, and all I can do is watch paint dry' seems to have vanished.

I dunno. Infinite scrolling sure feels like watching paint dry to me... but as a teenager my idea of a good time was finding an isolated bit of woods, and sitting still enough for the fauna to ignore me. Actually, that's still my idea of a good time, but I'm too busy and the woods anywhere nearby are too crowded.

mrweasel(10000) about 18 hours ago [-]

After getting vaccinated I was sitting in the waiting area for the 15 minutes you're supposed to stay in case you get an allergic reaction, and I noticed that most people were NOT looking at their phones. They were just sitting, doing nothing. I did the same, and honestly, it was absolutely wonderful just to have 15 minutes where you did nothing.

I'm not saying I wouldn't get bored just waiting for extended periods of time, but sometimes it's nice to know that for the next 10 - 20 minutes, you just have to exist in this spot, and that's all that's really expected of you.

Pyrodogg(10000) about 15 hours ago [-]

'Apathy's a tragedy and boredom is a crime' - Welcome to the Internet by Bo Burnham[1]

This new song from Bo's covid-lockdown-inspired special 'Inside' hits this right on the head. The Internet, particularly when paired with mobile devices, tries to suck up so much attention, because that's exactly what we made it do.

[1] https://www.youtube.com/watch?v=k1BneeJTDcU

tester756(10000) about 16 hours ago [-]

You can always rethink your life :P

Sometimes it's good to sit down 15min and rethink stuff

maerF0x0(10000) about 17 hours ago [-]

> not being bored

People weren't necessarily 'bored'-- remember that the boredom one feels when not hyper stimulated is due, at least in part, to adaptation to their peak/typical stimulus.

People probably got sufficient dopamine, from 'less exciting' things such as small talk, looking at the clouds, contemplating the meaning of their life while waiting for an interview to begin etc.

hateful(10000) about 17 hours ago [-]

As someone with ADHD, this has been the most advantageous change for me. Another thing I can do now is play a game on my phone during meetings - it may be counter-intuitive to others, but occupying my visual cortex and hands with a simple game allows me to pay attention to what someone is saying without having my mind wander.

I_AM_A_SMURF(10000) about 9 hours ago [-]

What I've been enjoying lately is purposely not taking my phone out when I have a minute or two to kill. Just look around, think about things; it's very relaxing to me.

I do remember however being bored to hell as a teenager, I would not want to live in that world, I really hated it.

jVinc(10000) about 17 hours ago [-]

> 'hmm I'm here and waiting, and all I can do is watch paint dry' seems to have vanished.

There are several influencers whose whole bit is centered around mixing paint, and I don't know how many channels on YouTube are dedicated to the sound it makes when you cut sand with a knife. So don't despair: I'm absolutely certain there is a channel out there dedicated to watching paint dry, on demand in bite-size vids with tons of user engagement, for those long drives when you just want to pull the plug and watch paint dry.

Broken_Hippo(10000) about 2 hours ago [-]

I'd prefer modern tech.

I've read far too many cereal boxes and shampoo bottles in my lifetime. Instead of getting annoyed at my unexpected 10 minute wait somewhere, I can just play a game. So many minor annoyances, gone due to the little pocket miracle. I can always choose not to pick it up, after all, but at least I have the choice now.

SonnyTark(10000) about 13 hours ago [-]

On paper, I'm younger than most of these stories, but I've lived in a closed-in country under an authoritarian regime that actively blocked technology from getting in, as it saw free information as a threat to its survival. When I used the internet for the first time, it was already the era of messengers and MMOs. It took several years until I saw my first Nokia mobile phone; they already had a color-display model out at the time, so it was marvelous. Computers were seen as a luxury, classified the same as typewriters and/or game consoles, so very few people understood what they were, and the few who had one usually had a machine a decade or two old.

I've experienced most of what this blog post describes, despite the fact it was after 2000! And when change came, it was abrupt and violent, almost like being surrounded by a technological theme park all of a sudden. It was at times funny to see people who had never owned anything more advanced than a CRT TV try to figure out how to use a smartphone. I distinctly remember one older guy who hit the brakes in the middle of the road, pulled over, and got out of his car to receive what must have been one of the first mobile phone calls of his life.

selimthegrim(10000) about 10 hours ago [-]

Serbia? Syria?

(574) WireGuardNT, a high-performance WireGuard implementation for the Windows kernel

574 points about 19 hours ago by zx2c4 in 10000th position

lists.zx2c4.com | Estimated reading time – 7 minutes | comments | anchor

[ANNOUNCE] WireGuardNT, a high-performance WireGuard implementation for the Windows kernel

Jason A. Donenfeld Jason at zx2c4.com Mon Aug 2 17:27:37 UTC 2021

Hey everyone,
After many months of work, Simon and I are pleased to announce the WireGuardNT
project, a native port of WireGuard to the Windows kernel. This has been a
monumental undertaking, and if you've noticed that I haven't read emails in
about two months, now you know why.
WireGuardNT, lower-cased as 'wireguard-nt' like the other repos, began as a
port of the Linux codebase, so that we could benefit from the analysis and
scrutiny that that code has already received. After the initial porting
efforts there succeeded, the NT codebase quickly diverged to fit well with
native NTisms and NDIS (Windows networking stack) APIs. The end result is a
deeply integrated and highly performant implementation of WireGuard for the NT
kernel, that makes use of the full gamut of NT kernel and NDIS capabilities.
You can read about the project and look at its source code here:
For the Windows platform, this project is a big deal to me, as it marks the
graduation of WireGuard to being a serious operating system component, meant
for more serious usage. It's also a rather significant open source release, as
there generally isn't so much (though there is some) open source crypto-NIC
driver code already out there that does this kind of thing while pulling
together various kernel capabilities in the process.
To frame what WireGuardNT is, a bit of background for how WireGuard on Windows
_currently_ works, prior to this, might be in store. We currently have a
cross-platform Go codebase, called wireguard-go, which uses a generic TUN
driver we developed called Wintun (see wintun.net for info). The
implementation lives in userspace, and shepherds packets to and from the
Wintun interface. WireGuardNT will (eventually) replace that, placing all of
the WireGuard protocol implementation directly into the networking stack for
deeper integration, in the same way that it's done currently on Linux,
OpenBSD, and FreeBSD.
With the old wireguard-go/Wintun implementation, the fact of being in
userspace means that for each RX UDP packet that arrives in the kernel from
the NIC and gets put in a UDP socket buffer, there's a context switch to
userspace to receive it, and then a trip through the Go scheduler to decrypt
it, and then it's written to Wintun's ring buffer, where it is then processed
upon the next context switch. For TX, things happen in reverse: userspace
sends a packet, and there's a context switch to the kernel to hand it off to
Wintun, which places it into a ring buffer, and then there's another context
switch to userspace, and a trip through the Go scheduler to encrypt it, and
then it's sent through a socket, which involves another context switch to send
it. All of the ring buffers -- Wintun's rings and Winsock's RIO rings --
amortize context switches as much as possible and make this decently fast, but
all in all it still constitutes overhead and latency. WireGuardNT gets rid of
all of that.
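The amortization described above can be illustrated with a toy model (plain Python; this is not WireGuard or Wintun code, just a sketch of the arithmetic): if each packet costs one context switch you pay N switches, while a ring buffer drained in batches of up to B packets costs roughly ceil(N/B).

```python
import math

def switches_per_packet(n_packets: int) -> int:
    """Naive userspace path: one context switch per RX packet."""
    return n_packets

def switches_with_ring(n_packets: int, batch: int) -> int:
    """Ring-buffer amortization: each switch drains up to `batch` packets."""
    return math.ceil(n_packets / batch)

if __name__ == "__main__":
    n = 10_000
    print(switches_per_packet(n))     # 10000
    print(switches_with_ring(n, 64))  # 157
```

Batching shrinks the switch count by roughly the batch size, which is why the rings make the userspace path "decently fast" while still leaving per-switch overhead that an in-kernel implementation avoids entirely.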
While performance is quite good right now (~7.5Gbps TX on my small test box),
not a lot of effort has yet been spent on optimizing it, and there's still a
lot more performance to eke out of it, I suspect, especially as we learn more
about NT's scheduler and threading model particulars. Yet, by simply being in
the kernel, we significantly reduce latency and do away with the context
switch problems of wireguard-go/Wintun.
Most Windows users, however, don't really care what happens beyond 1Gbps, and
this is where things get interesting. Windows users with an Ethernet
connection generally haven't had much trouble getting close to 1Gbps or so
with the old slow wireguard-go/Wintun, but over WiFi, those same users would
commonly see massive slowdowns. With the significantly decreased latency of
WireGuardNT, it appears that these slowdowns are no more. Jonathan Tooker
reported to me that, on his system with an Intel AC9560 WiFi card, he gets
~600Mbps without WireGuard, ~600Mbps with wireguard-go/Wintun over Ethernet,
~95Mbps with wireguard-go/Wintun over WiFi, and ~600Mbps with WireGuardNT over
WiFi.  In other words, the WiFi performance hit from wireguard-go/Wintun has
evaporated when using WireGuardNT. Power consumption, and hence battery usage,
should be lower too.
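Taking the reported figures at face value, a quick back-of-the-envelope check of the WiFi numbers:

```python
no_vpn = 600     # Mbps over WiFi, no WireGuard (as reported)
go_wintun = 95   # Mbps over WiFi, wireguard-go/Wintun (as reported)
nt = 600         # Mbps over WiFi, WireGuardNT (as reported)

print(round(no_vpn / go_wintun, 1))  # ~6.3x slowdown on the userspace path
print(nt / no_vpn)                   # ratio of 1.0: no WiFi penalty left
```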
And of course, on the multigig throughput side of things, Windows Server users
will no doubt benefit.
The project is still at its early stages, and for now (August 2021; if you're
reading this in the future this might not apply) this should be considered
'experimental'. There's a decent amount of new code on which I'd like to spend
a bit more time scrutinizing and analyzing. And hopefully by putting the code
online in an 'earlier' stage of development, others might be interested in
studying the source and reporting bugs in it.
Nonetheless, experimental or not, we still need people to test this and help
shake out issues. To that end, WireGuardNT is now available in the ordinary
WireGuard for Windows client -- https://www.wireguard.com/install/ -- with the
0.4.z series, in addition to having full support of the venerable wg(8)
utility, but currently (August 2021; if you're reading this in the future this
might not apply) it is behind a manually set registry knob. There will be
three phases of the 0.4.z series:
  Phase 1) WireGuardNT is hidden behind the 'ExperimentalKernelDriver'
           registry knob. If you don't manually tinker around to enable it,
           the client will continue to use wireguard-go/Wintun like before.
  Phase 2) WireGuardNT is enabled by default and is no longer hidden.
           However, in case there are late-stage problems that cause
           downtime for existing infrastructure, there'll be a new hidden
           knob called 'UseUserspaceImplementation' that goes back to
           using wireguard-go/Wintun like before.
  Phase 3) WireGuardNT is enabled, and wireguard-go/Wintun is removed from
           the client. [Do note: as projects and codebases, both Wintun and
           wireguard-go will continue to be maintained, as they have
           applications and uses outside of our WireGuard client, and Wintun
           has uses outside of WireGuard in general.]
The leap between each phase is rather large, and I'll update this thread when
each one happens. Moving from 1 to 2 will happen when things seem okay for
general consumption and from 2 to 3 when we're reasonably sure there's the
same level of stability. Since we don't include any telemetry in the client, a
lot of this assessment will be a matter of you, mailing list readers, sending
bug reports or not sending bug reports. And of course, having testers during
the unstable phase 1 will be a great boon. Instructions on enabling these
knobs can be found in the usual place:
[ If you're reading this email in the future and that page either does not
  exist or does not contain mention of 'ExperimentalKernelDriver' or
  'UseUserspaceImplementation', then we have already moved to phase 3, as
  above, and none of this applies any more. ]
So, please do give it a whirl, check out the documentation and code, and let
me know what you think. I'm looking forward to hearing your thoughts and
receiving bug reports, experience reports, and overall feedback.

More information about the WireGuard mailing list

All Comments: [-] | anchor

bob1029(10000) about 17 hours ago [-]

This is exciting to me. I have tripped over every VPN technology listed on Wikipedia at one point or another during my career. Always open to something better.

I think IPSec or OpenVPN are probably the opposite of what WG is offering here... Microsoft's SSTP offering is actually not causing me any major frustration at the moment. I almost like using it. But, seeing these other comments telling tales of 600 megabit VPN wifi experiences... I'll check it out for sure.

midasuni(10000) about 16 hours ago [-]

I had an sstp tunnel refuse to establish a few weeks ago. WireGuard was fine. Turns out the provider was MITMing tcp/443 traffic

Panino(10000) about 18 hours ago [-]

Very impressive performance:

> While performance is quite good right now (~7.5Gbps TX on my small test box), not a lot of effort has yet been spent on optimizing it

> Jonathan Tooker reported to me that, on his system with an Intel AC9560 WiFi card, he gets ~600Mbps without WireGuard, ~600Mbps with wireguard-go/Wintun over Ethernet, ~95Mbps with wireguard-go/Wintun over WiFi, and ~600Mbps with WireGuardNT over WiFi.

Congratulations to Simon and Jason! Very happy WireGuard user here.

jve(10000) about 7 hours ago [-]

People always compare bandwidth which is important.

Has anyone done any comparisons of how latency is affected between various VPN implementations?

brian_herman(10000) about 16 hours ago [-]

Yes, I am gonna reinstall WireGuard on my Raspberry Pi again. This is amazing news. And I will try getting my Windows Server Ryzen PC to act as a router so I can benchmark all four configs.

zinekeller(10000) about 18 hours ago [-]

While the driver can be licensed under GPLv2 (all kernel drivers need to be signed by Microsoft*, and VirtIO is a precedent¤ that you can do it), I'm not sure the header should be licensed under GPLv2, mainly because it would stifle WireGuard adoption.

* In ordinary conditions. Test-sign mode does exist.

¤ ... for example, these Red Hat versions: https://www.catalog.update.microsoft.com/Search.aspx?q=Red%2...

Denvercoder9(10000) about 17 hours ago [-]

The header is dual-licensed under GPLv2 and MIT.

vetinari(10000) about 18 hours ago [-]

You can get them here: https://fedorapeople.org/groups/virt/virtio-win/direct-downl... packaged in a nice ISO, ready to use in the ISO store of your hypervisor.

(It might be also slightly newer; v204 is

shawnz(10000) about 7 hours ago [-]

VirtIO changed license from GPL to BSD so that it could be signed by Microsoft. See here: https://github.com/virtio-win/kvm-guest-drivers-windows/comm...

kzrdude(10000) about 18 hours ago [-]

What is WireGuard, is it a new protocol? Or a new algorithm for implementing an existing thing? (Or something else)

Scaevolus(10000) about 17 hours ago [-]

Wireguard is a UDP-based VPN protocol that focuses on simplicity and security. Its Linux implementation is a mere 4000 LOC and the protocol has been formally verified. OpenVPN is over 100,000 lines of code PLUS OpenSSL.


nicoburns(10000) about 18 hours ago [-]

It's a VPN protocol whose USP is being dramatically simpler than OpenVPN, which should mean that it is both easier to use and more secure (and consensus seems to be that it generally delivers on both of those fronts).

dsr_(10000) about 18 hours ago [-]

wireguard is a VPN technology that is now integrated into the Linux kernel, and is available on all major platforms.

It distinguishes itself from other VPNs by not having knobs to twiddle. Should a security issue arise, it will be necessary to replace it with a wireguard2 or such. This also means that it's very hard to get it wrong in config; either it works or it doesn't, and if it doesn't, you haven't got it working yet.

It's very fast and very nice to work with.

tptacek(10000) about 16 hours ago [-]

I think you could reasonably look at WireGuard as a repudiation of previous VPN protocols, almost from root to branch.

For instance, WireGuard reconsiders what the role of a VPN 'protocol' actually is, and in WireGuard the protocol itself delivers a point-to-point secure tunnel and nothing else, so that the system is composable with multiple different upper-level designs (for instance, how you mesh up with multiple endpoints, or how you authenticate).

Another reasonable way to look at WireGuard is that it's the Signal Protocol-era VPN protocol (WireGuard is derived from Trevor Perrin's Noise protocol framework).

Notably: WireGuard doesn't attempt to negotiate cryptographic parameters. Instead, they've selected a good set of base primitives (Curve25519, Blake2, ChaPoly) and that's that; if those primitives ever change, they'll version the whole protocol.

If you haven't played with it, WireGuard is approximately as hard to set up as an SSH connection. It is really a breath of fresh air.
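For a sense of what that setup looks like, here is a minimal wg-quick-style config for a single peer; the keys, addresses, and endpoint below are placeholders for illustration, not values from this thread:

```ini
[Interface]
# This machine's identity and tunnel address (placeholders).
PrivateKey = <base64 private key, e.g. from `wg genkey`>
Address = 10.0.0.2/32

[Peer]
# The remote side; public keys are exchanged out of band.
PublicKey = <peer's base64 public key>
AllowedIPs = 10.0.0.0/24
Endpoint = vpn.example.com:51820
```

There are no cipher suites, modes, or negotiation options to choose: the config is essentially just keys and addresses, which is the "no knobs" property other comments in this thread describe.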

phkahler(10000) about 15 hours ago [-]

>> What is WireGuard?

According to Linus: '...compared to the horrors that are OpenVPN and IPSec, it's a work of art.'

stjohnswarts(10000) about 16 hours ago [-]

The wikipedia article for it is actually quite informative and not hard to read for the layperson. https://lmwtfy.joe.gl/?q=wireguard

Also Ars had a great article on it as well if you want a readable but more in depth version https://arstechnica.com/gadgets/2018/08/wireguard-vpn-review...

aborsy(10000) about 15 hours ago [-]

In some networks, I only have outgoing tcp ports 80 and 443.

Does anyone have experience with udp2raw or udptunnel?

PhantomGremlin(10000) about 6 hours ago [-]

This is just insane.

Everything except WWW is blocked, so everything must pretend to be WWW???!!!

So can anyone explain the purpose of the 'source port' and 'destination port' fields in the TCP header? :-)

roozbeh18(10000) about 14 hours ago [-]

WireGuard is so good, sometimes I forget I am on vpn and only realize it when downloading a large file that my speed is capped by my home speed.

AlexanderTheGr8(10000) about 6 hours ago [-]

I am curious. Which VPN do you use for daily activities?

no_time(10000) about 7 hours ago [-]

Will it be possible to fall back to the userspace implementation to use obfuscation software like shadowsocks? Or will it be deprecated?

Unfortunately the recent popularity means that almost all DPI software recognize the wireguard handshake.

PhantomGremlin(10000) about 6 hours ago [-]

almost all DPI software recognize the wireguard handshake.

Why should that matter? How does the DPI software get your keys? Isn't WireGuard data flow completely opaque to anyone or anything between endpoints?

If the DPI software blocks WireGuard packets, that's an entirely different discussion. It gets into the area of 'technical solutions' to fight 'administrative policy'.

riobard(10000) about 3 hours ago [-]

Jason previously stated in mailing list (couldn't find the thread now) that obfuscation is not a goal of WireGuard.

nixcraft(10000) about 19 hours ago [-]

I would like to see 2FA (app or security key) support built into WireGuard. Otherwise, it is perfect as compared to the OpenVPN mess.

jagger27(10000) about 18 hours ago [-]

Isn't that just a roundabout way of asking for PSK support (which it already has)?

yobert(10000) about 18 hours ago [-]

Think of wireguard as the plumbing. There will be a plethora of things available on top of wireguard that will enable all sorts of easy authentication options. (For example, TailScale.)

anonymousiam(10000) about 18 hours ago [-]

WireGuard is not MFA, but the user's private key could probably be stored in a smart-card instead of on disk. Software changes would need to be made so the key is read from the card instead of specified in the wgx.conf file.

To achieve true MFA, it would need either a password, TOTP, or SMS in addition to the stored keys.

idorosen(10000) about 18 hours ago [-]

WireGuard itself doesn't even handle its existing authentication fully -- you are expected to exchange peer public keys out of band. There are several projects that try to tackle this public key exchange. I think what you're asking for, indirectly, is support for certificate authority style authentication similar to how SSH CAs work, so that wireguard could authenticate tunnels using certificates with signed pubkeys instead of statically configured pubkeys themselves for each peer.

If the wireguard core included any kind of timed partial delegation of authority through key signatures (similar to what SSH allows now with cert-authority/CertificateFile), that'd be enough to build SMS/HOTP/TOTP 2FA, security keys, and much more on top of it.

drexlspivey(10000) about 17 hours ago [-]

Adding features like this that should be implemented on a different layer is the perfect way to turn it to the OpenVPN mess

stjohnswarts(10000) about 16 hours ago [-]

I think you'll have to use other options for that. I don't see them ever implementing 2FA, as that is outside the goals of the project. They want to keep it as slim, performant, and on-target as possible.

atonse(10000) about 18 hours ago [-]

Tailscale solves all these problems, including SSO.

Can you tell I'm a very happy customer?

EwanToo(10000) about 16 hours ago [-]

Pritunl has wireguard support and works well for this

ec109685(10000) about 14 hours ago [-]

Any thought if Windows will embed this natively similar to how Linux pulled WireGuard into the kernel?

YPPH(10000) about 11 hours ago [-]

Licensing issues aside, do we really want to rely on Microsoft to keep it up to date? I can imagine it becoming quickly outdated, particularly in enterprise SKUs.

I think it's best left to the Wireguard team and not Redmond.

MikusR(10000) about 3 hours ago [-]

Windows can load stuff like this dynamically and doesn't require everything to be compiled into the kernel.

jiggawatts(10000) about 13 hours ago [-]

For reference, I've never seen the built-in Windows VPN protocols exceed ~70 Mbps in any scenario. Maybe it's possible with a crossover cable between two Mellanox 100 Gbps NICs, using water-cooled and overclocked CPUs, but not over ordinary networks with ordinary servers.

I have gigabit wired Internet to a site with gigabit Internet. Typical performance of SSTP or IKEv2 is 15-30 Mbps. That's 1.5% to 3% max utilisation of the available bandwidth, which is just... sad.

It's not the specific site either, other vendor VPNs can easily achieve > 300 Mbps over the same path.

It's a year and a half into the pandemic, there are record numbers of people working from home, and Microsoft is the world's second biggest company right now.

Meanwhile, volunteers put together a protocol in their spare time that is not only more secure but can also easily do 7.5 Gbps!

That needs to be repeated: At least ONE HUNDRED TIMES faster than the 'best' Microsoft can offer to their hundreds of millions of enterprise customers that are working from home.

Someone from Microsoft's networking team needs to read this, and then watch Casey Muratori's rant about Microsoft's poor track record with performance: https://www.youtube.com/watch?v=99dKzubvpKE

1vuio0pswjnm7(10000) about 12 hours ago [-]

'... with a crossover cable...'

Many years ago, I once brought a crossover cable from home to the office to do some data transfer from a workstation to a company-issued laptop. The IT department issuing the laptop, being lovers of all things Microsoft, claimed the crossover cable was 'obsolete' due to the auto-sensing used by Windows.

I am just another dumb end user, I do not work in IT, but I still get faster data transfer between two computers with crossover cable than by going through a third computer, or God forbid, over Wifi.

Sounds like crossover cable is not 'obsolete' after all. Who would have thought.

Microsoft's customers, e.g., IT departments, are arguably complicit in the sad 'state-of-the-art' you describe. The best software I have ever used was written by volunteers. Money can't buy everything. As Microsoft has shown, it can certainly buy customers.

pjmlp(10000) about 6 hours ago [-]

Not surprising at all, it is just not worthwhile doing from a project management perspective, regardless of what a bunch of people on the Internet think about it.

Historical Discussions: Hosting SQLite Databases on GitHub Pages (July 31, 2021: 550 points)

(559) Hosting SQLite Databases on GitHub Pages

559 points 3 days ago by isnotchicago in 10000th position

phiresky.netlify.app | Estimated reading time – 15 minutes | comments | anchor

Hosting SQLite databases on Github Pages

(or any static file hoster)

Apr 17, 2021 • Last Update May 03, 2021

I was writing a tiny website to display statistics of how much sponsored content a Youtube creator has over time when I noticed that I often write a small tool as a website that queries some data from a database and then displays it in a graph, a table, or similar. But if you want to use a database, you either need to write a backend (which you then need to host and maintain forever) or download the whole dataset into the browser (which is not so great when the dataset is more than 10MB).

In the past, when I've used a backend server for these small side projects, at some point some external API goes down or a key expires or I forget about the backend and stop paying for whatever VPS it was on. Then when I revisit it years later, I'm annoyed that it's gone and curse myself for relying on an external service - or on myself caring over a longer period of time.

Hosting a static website is much easier than running a 'real' server - there are many free and reliable options (like GitHub Pages, GitLab Pages, Netlify, etc.), and it scales to basically infinity without any effort.

So I wrote a tool to be able to use a real SQL database in a statically hosted website!

Here's a demo using the World Development Indicators dataset - a dataset with 6 tables and over 8 million rows (670 MiByte total).


select country_code, long_name from wdi_country limit 3;

As you can see, we can query the wdi_country table while fetching only 1kB of data!
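The article goes on to explain the trick: SQLite reads its file in fixed-size pages, and each page read can be served by an HTTP Range request against the statically hosted database file. A minimal sketch of that arithmetic (not the library's actual code), assuming the documented SQLite header layout - the page size is a big-endian 16-bit integer at byte offset 16, where a stored value of 1 means 65536:

```javascript
// SQLite pages are numbered from 1. Knowing the page size, a virtual
// filesystem can translate "read page N" into a single HTTP Range request
// (e.g. `Range: bytes=start-end`) against the hosted .sqlite file.
function pageRange(headerBytes, pageNo) {
  const raw = (headerBytes[16] << 8) | headerBytes[17]; // big-endian u16 at offset 16
  const pageSize = raw === 1 ? 65536 : raw;             // stored 1 means 65536
  const start = (pageNo - 1) * pageSize;
  return { start, end: start + pageSize - 1 };
}

// Example: a database created with 4 KiB pages (0x1000)
const header = new Uint8Array(100);
header[16] = 0x10; header[17] = 0x00;
console.log(pageRange(header, 2)); // { start: 4096, end: 8191 }
```

So a query that only touches a few index and table pages ends up transferring only a few kilobytes, no matter how large the database file is.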

This is a full SQLite query engine. As such, we can use e.g. the SQLite JSON functions:


select json_extract(arr.value, '$.foo.bar') as bar
  from json_each('[{"foo": {"bar": 123}}, {"foo": {"bar": "baz"}}]') as arr

We can also register JS functions so they can be called from within a query. Here's an example with a getFlag function that gets the flag emoji for a country:

JS Demo

function getFlag(country_code) {
  // just some unicode magic
  return String.fromCodePoint(...Array.from(country_code||'')
    .map(c => 127397 + c.codePointAt()));
}

await db.create_function('get_flag', getFlag)
return await db.query(`
  select long_name, get_flag("2-alpha_code") as flag from wdi_country
    where region is not null and currency_unit = 'Euro';
`)

Press the Run button to run the following demos. You can change the code in any way you like, though if you make a query too broad it will fetch large amounts of data ;)
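The "unicode magic" works because Unicode's regional indicator symbols start at U+1F1E6 and mirror the ASCII uppercase alphabet, so each letter only needs a fixed offset of 127397 (0x1F1E6 minus 'A'). A standalone version of the function, runnable outside the demo page:

```javascript
// Each ASCII uppercase letter maps to a "regional indicator symbol":
// 'A' (65) -> U+1F1E6 (127462), so the offset is 127462 - 65 = 127397.
// Two such symbols in a row render as a country flag emoji.
function getFlag(countryCode) {
  return String.fromCodePoint(
    ...Array.from(countryCode || '').map(c => 127397 + c.codePointAt(0))
  );
}

console.log(getFlag('DE')); // 🇩🇪
console.log(getFlag('EU')); // 🇪🇺
```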

Note that this website is 100% hosted on a static file hoster (GitHub Pages).

So how do you use a database on a static file hoster? Firstly, SQLite (written in C) is compiled to WebAssembly. SQLite can be compiled with emscripten without any modifications, and the sql.js lib